Many humans are incapable of making a similar calculation. They will continue to try to win "the short game", ignoring the effect this will have on "the long game".

I was talking of means, rather than ends. The principles behind drones and the fully autonomous devices we are discussing are very different. The military want drones (and personnel) that are incapable of making decisions that lie outside the scope of a given operation. For example, they want a soldier who will decide not to shoot harmless non-combatants, but not a soldier who will decide that the whole idea of shooting people is wrong or self-defeating.

The military don't usually decide to go anywhere to pick a fight. However, once the military is there, they can often be relied upon to make the best of a bad decision. Thus, although no one in their right mind wanted Britain to help invade Iraq, and although the British soldiers weren't welcomed with open arms, in Basra, for instance, the British troops had a far better reputation with the locals than the American troops had in other areas of the country. Bad choice by the Government to go into Iraq; good choices by British military personnel once there. /digression

This. Play to your strengths, in an arena that your opposition isn't even properly aware of. Yes, you can really worry at that point.
I met a chap in his 80s a few years back at a bar I was working in... sounds dodgy. But he was a very interesting old boy who, as well as being the head (if I remember rightly) of the atomic energy programme in the sixties, was apparently one of the founding members of MENSA (maybe he was, maybe he wasn't; however, he got one of my mates a job at a private old boys' club in central London, and he said the man was obviously very well connected). Had many a chat with old Gordon.

One was about intelligence and how it was measured. Apparently the guy at the time with the highest IQ in the club (200+?) was the most socially awkward bloke you'd ever meet... no mates, women or social skills. I suggested that maybe the way they were measuring intelligence was flawed. He may have been very clever with maths, problem solving, thinking outside the box, blah blah, but surely one of the fundamentals (as well as self-awareness) is interaction with others? Wouldn't an intelligent man be able to communicate? Or read another's body language, understand when he is under threat or about to get laid? The old timer was chuffed with this! (Maybe partly because he only scraped into the club and wasn't the biggest fan of the eggheads.) And we were both mashed on red wine!!! But he did agree.

The point? I think you may be able to program something to "learn" for itself and adapt to its environment, but interaction, negotiation, humour and whatnot will only ever be found in humans... (and chimps... and dolphins... and one day aliens) and maybe Xcige
The thing is, people will try to develop artificial intelligence, because it's an interesting challenge and they will be paid to do it (for good reasons, of course). The difficulty arises when you have succeeded in the challenge - and we will succeed, eventually, and probably a lot faster than many people seem to think. People severely underestimate what is possible. If you had said a few years ago that Google would map the earth and create Street View, people would have told you that such an undertaking was impossible or would take lifetimes. Wrong. Now we aren't amazed; we just use it and think it's normal. Look at what computing has achieved in the last 25 years. It's astonishing. So be prepared to be astonished again in the next 25 years. Then you come up against the whole idea of the Black Swan - something that you don't foresee but which will happen anyway, the "unknown unknown" in Donald Rumsfeld's parlance. Once you've created a recipe for disaster, disaster is an inevitability. It's only the time frame that is unknown.
The robot in the video on your blog post is called PETMAN, developed by Boston Dynamics to test the durability of military clothing. It has no combat application. Atlas, also by BD, is a testbed for developing a two-legged platform that is stable on rough terrain; again, no combat application as yet, though it is a military project. BigDog, however, is a legged robot designed to carry soldiers' kit.

Why would an AI take over the Internet? Live and thrive in it, yes, but why take it over? That would just lead to it being unplugged; not exactly self-serving, is it? Why is a self-programming machine trouble? Humans are organic self-programming machines, ones which walk too! Asimov's Three Laws of Robotics are fictional, but many consider them a basis for a potential ethical code for an AI. Yes, a self-programming machine could choose to ignore them (just as a human can break a law), but why would it, unless we (humans) give it a reason to?
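To make that last point concrete: the Three Laws amount to a priority-ordered filter on actions. Here's a minimal toy sketch in Python (every name and predicate in it is hypothetical, invented purely for illustration), just to show the structure:

```python
# Toy illustration only: Asimov's Three Laws as a priority-ordered
# action filter. The predicates are invented stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law concern
    disobeys_order: bool   # Second Law concern
    harms_self: bool       # Third Law concern

def permitted(action: Action) -> bool:
    """Check the Laws in strict priority order: a lower Law never
    overrides a higher one."""
    if action.harms_human:        # First Law
        return False
    if action.disobeys_order:     # Second Law, subject to the First
        return False
    if action.harms_self:         # Third Law, subject to the first two
        return False
    return True

candidates = [
    Action("open_fire", harms_human=True, disobeys_order=False, harms_self=False),
    Action("carry_kit", harms_human=False, disobeys_order=False, harms_self=False),
]
print([a.name for a in candidates if permitted(a)])  # ['carry_kit']
```

And that's exactly where the post's caveat bites: nothing in a self-programming machine stops it rewriting `permitted` itself. The filter only binds if the architecture keeps it out of the machine's reach, or if we never give it a reason to touch it.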
Will Fig's grandchildren's grandchild's great-grandchild, three times removed, in 5000 years be asking 'how were they able to do that?'
We already have "dumb" programmes (I like to call them "Windows") with not a shred of AI, but which are rarely capable of doing the same task the same way twice in a row. The reason for this behaviour may lie with chaos theory, or with the fact that the programme/hardware interaction is too complicated to be easily predictable... but that's just one of today's OSes. How can we predict the behaviour of something *really* complex, such as an AI?
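You don't even need chaos theory to see run-to-run variation. Here's a toy Python sketch (purely illustrative) where the only "complexity" is the OS scheduler deciding when each thread gets to run:

```python
# Eight identical tasks on four threads: the source code is fully
# deterministic, but the completion order depends on how the OS
# scheduler interleaves the threads, so it typically differs per run.
import concurrent.futures

def work(n: int) -> int:
    sum(range(100_000))  # trivial busy work
    return n

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, i) for i in range(8)]
    order = [f.result() for f in concurrent.futures.as_completed(futures)]

print(order)  # e.g. [1, 0, 3, 2, 5, 4, 7, 6] one run, something else the next
```

If a handful of threads already makes a program's behaviour unrepeatable, predicting a system that rewrites its own behaviour is a different order of problem entirely.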
The Turing test is a test of a simulation. Turing contemplated a hypothetical machine (hypothetical in his day) which could be programmed to produce a convincing simulation of human-like intelligence. A machine which would have actual human-like intelligence was inconceivable, even to a genius like Turing.
I think people are too hung up on AI being a replica of human intelligence. It wouldn't be, I feel; it would be a very logical machine intelligence. It simply means the ability to go beyond the initial programme. If you set up a machine to acquire knowledge which is useful for its own mission, to respond to its environment, to process external stimuli and to learn from them in order to better fulfil its mission, you have what the programmers are probably aiming at. I don't think AI is about being human, feeling empathy or fooling people at cocktail parties; in what way would that really be useful? Nonetheless, a highly developed AI might decide that it was useful in its interactions with humans, and might set out to develop this subterfuge. You just can't know. Once you have set up a machine to make its own decisions about what is useful, you don't know where it's going to end up.
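That loop is easy to make concrete. Here's a minimal Python sketch of a machine deciding for itself what is useful (the "mission" and its reward signal are toy stand-ins I've invented): the designer never ranks the actions, and the machine's notion of usefulness emerges entirely from feedback.

```python
# A toy learning loop: the agent builds its own estimate of which
# actions serve its mission, based only on a reward signal.
import random

actions = ["probe_sensor", "query_database", "idle"]
value = {a: 0.0 for a in actions}   # the machine's own estimate of "useful"
counts = {a: 0 for a in actions}

def environment(action: str) -> float:
    """Hypothetical feedback; in a real system, whatever signal the
    mission provides."""
    true_payoff = {"probe_sensor": 1.0, "query_database": 0.3, "idle": 0.0}
    return true_payoff[action] + random.gauss(0, 0.1)

epsilon = 0.1
for _ in range(1000):
    if random.random() < epsilon:               # occasionally explore
        a = random.choice(actions)
    else:                                       # mostly exploit current beliefs
        a = max(actions, key=lambda x: value[x])
    reward = environment(a)
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]  # running average of reward

print(value)  # what the machine has decided is useful
```

Even in this three-action toy, `value` is written by the feedback, not by the programmer. Scale the action space up and "you don't know where it's going to end up" stops being rhetoric and becomes a literal description.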