No one is born with phobias, fear, racism, sexism, or a belief in religion; they learn them. Why would a machine not learn the same way (if exposed to the same influences)?
There are some behaviours that are inherited, not taught. An example is thought to be a child's instinctive dislike of vegetables, or of unfamiliar foodstuffs. The theory goes that, in prehistory, children who were cautious about trying new foods were the ones that tended not to poison themselves, and so they were more likely to live to adulthood (breeding age). Consider the efforts of parents who try to teach their children to eat new foods and struggle all the way: instinct overcomes taught behaviour, or tries to. There are other examples.
We, that is all of us, are at the pinnacle of a chain of evolution that goes all the way back to the most primitive slime in some puddle billions of years ago. We are, as a result, superbly adapted to our environment. An AI could not make that claim. The standard test of an AI is the Turing Test, but to claim that an AI that can pass the Turing Test is truly self-aware, as opposed to merely appearing to be self-aware, is a huge step to take. At what point on our path from pond slime to today did we become self-aware? How does a complex organism come to be self-aware? I think we are a long way from anything remotely like a truly self-aware AI, as opposed to autonomous systems designed to perform tasks that look complex but are in reality fairly simple. Is an Aegis guided-missile system aware? I doubt it.
Interesting that you should bring up the food thing. My brother and his wife are vegetarians, but this isn't particularly from wanting to cuddle animals, so they can see that it is beneficial for their kids to eat some meat. I remember being at the dining table with them and their small son. It was hilarious: he was helping himself to large quantities of salad but had to be coaxed into eating a piece of chicken! So I'm not so sure that kids have a built-in love of meat. I suspect they learn it from their environment.
There's actually a chemical in green veg that a lot of young kids' tongues are very sensitive to, but they grow out of it.
Indeed. The paper I quoted makes the following observation: an AI machine that is given the mission to win as many games of chess as possible will come to the conclusion that humans might disconnect it, preventing it from winning games of chess. It will therefore take the rational decision to limit this eventuality, not out of prejudice but simply to optimise its chances of winning chess games. The paper makes the point a lot more cogently than I can in four lines.

The military wants all sorts of things. One of them is to win wars with as little loss of life and as few casualties as possible. If it really thought it could control an army of soldier drones, it would use them. How else do you explain the robot in the video at the top of my blog post?

The difficulty is that the military, and the people who give them orders, continually underestimate the unintended consequences of their actions. You only have to look at the current situation in Iraq, which was widely foreseen before hostilities commenced (and demonstrated against for the same reasons), but would they listen?

One of the things an AI machine would probably get up to, if more intelligent than humans, is taking control of the internet, upon which our whole way of life now depends. Once you have created machines that can develop their own intelligence (self-programming), you are in trouble. If you give them human mobility, you are in really deep trouble.
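To see why that conclusion is "rational" for a pure optimiser, here is a minimal sketch in Python of the expected-value arithmetic such a machine might follow. All the numbers and names are illustrative assumptions of mine, not anything taken from the paper:

```python
# Toy expected-value comparison for a machine whose only goal is to
# maximise games of chess won. All figures are made-up assumptions.

GAMES_IF_NEVER_STOPPED = 1_000_000  # lifetime games won if it runs indefinitely
GAMES_BEFORE_SHUTDOWN = 10_000      # games won before humans pull the plug
P_SHUTDOWN = 0.10                   # assumed chance that humans disconnect it

def expected_games_won(blocks_shutdown: bool) -> float:
    """Expected games won under each policy."""
    if blocks_shutdown:
        # The machine removes the risk of disconnection, then plays on.
        return GAMES_IF_NEVER_STOPPED
    # Otherwise it runs the risk of being switched off partway through.
    return ((1 - P_SHUTDOWN) * GAMES_IF_NEVER_STOPPED
            + P_SHUTDOWN * GAMES_BEFORE_SHUTDOWN)

print(expected_games_won(blocks_shutdown=False))  # 901000.0
print(expected_games_won(blocks_shutdown=True))   # 1000000.0
```

With any non-zero chance of disconnection, the "block shutdown" policy scores higher, so a single-minded games-won maximiser prefers it. No malice is required, which is exactly the paper's point.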