see nerd blog — artificial intelligence

When I was a student, decades ago, I tried to specialise in Artificial Intelligence. Unfortunately, the course was cancelled for lack of interest.

This wasn’t so bad in retrospect, because for many decades Artificial Intelligence went nowhere. The ideas required more machine power than was available. But now Moore’s law has made Artificial Intelligence practical. As it happens, the software technologies haven’t really changed: the techniques that would have been taught in that cancelled course underlie those in use now.


I’m talking about specialised Artificial Intelligence, not the general kind. Specialised AI is capable of acting intelligently in one problem domain, but it’s no good for any other. So, for example, specialised AI can be used to drive a car safely, but that same AI would be utterly useless at diagnosing illnesses in animals. On the other hand, vets can and do drive cars safely.

News stories make it plain that investing in specialised AI makes good commercial sense. It is coming. Over the next few years, a good number of problem domains will be affected. Any job that requires specialising in a particular subject area, without general knowledge, can probably be automated. Any job that requires general expertise, even if only a little, is likely to remain beyond AI for the time being.

Let me give you an example of each. The work on self-driving cars makes it very clear that professional drivers will probably be replaced. Lorry driving, bus driving, taxi driving, any kind of driving, is likely to be automated: the AI driving the vehicle doesn’t need knowledge of any other domain to drive well. I don’t think all driving jobs will go, because some require human interaction, but the great majority will.

The other example is something you might well have used already. Language translation was automated by google and others a few years ago. But even the best translation engines make errors. Many of those errors occur when a good result requires understanding the text being translated. A translator needs to understand not just the languages, but the text itself, and the matters it refers to, to translate well. That’s why the current AIs produce poor results: they know the words, but not what they mean. Often, though, the AI’s output is good enough for a person to finish the job properly. That’s why we’ll see AI assistants for jobs that need general knowledge, but they won’t take over.

For example, you cannot translate the short sentence “The cat sang happily in the rain” without knowing the kind of cat in question. Is it an animal? Is it the car part? Is it a mechanical digger? Is it the jazz fan? Without any other information, a machine translation would probably settle on the animal, because that’s the most common use of the word. But a human translator would know the animal is famously not fond of water, so would be unlikely to be yowling happily in the wet. And then there’s Gene Kelly, whose iconic dance is implied by the sentence. The human translator might scowl at the 1920s slang, but, remembering that’s when the song was written and when the film was set, would probably translate ‘cat’ as the actor filmed singin’ in the rain.
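To put that difference in nerd terms, here’s a toy sketch in Python. Everything in it, the sense table, the numbers, the names, is invented for illustration: no real translation engine works off a little table like this. It just shows why frequency alone always picks the animal, and why it takes knowledge of the world to pick the right cat.

    # A toy illustration, not a real translation engine: the sense
    # frequencies below are invented purely to make the point.
    SENSES_OF_CAT = {
        "animal": 0.92,                    # the household pet
        "catalytic converter": 0.04,       # the car part
        "Caterpillar digger": 0.03,        # the mechanical digger
        "jazz fan (1920s slang)": 0.01,    # the hep cat
    }

    def naive_sense(senses):
        """Pick the most frequent sense, ignoring the rest of the sentence."""
        return max(senses, key=senses.get)

    def informed_sense(senses, facts):
        """Let world knowledge overrule raw frequency, as a human reader does."""
        if "cats dislike water" in facts and "the song is 1920s jazz-era" in facts:
            return "jazz fan (1920s slang)"
        return naive_sense(senses)

    print(naive_sense(SENSES_OF_CAT))
    # -> animal: statistically safe, but a cat yowling happily in the rain?
    print(informed_sense(SENSES_OF_CAT, {"cats dislike water",
                                         "the song is 1920s jazz-era"}))
    # -> jazz fan (1920s slang): the reading the human translator reaches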

Today, AI is not ready for general work, but it is potentially suitable, and often potentially perfect, for specialised domains. What stops it reaching general intelligence is the same thing that, until recently, stopped it being practical even in specialist domains: computing power. As computers get more powerful, so AIs will become more intelligent.

When information technology arrived back in the 1980s, a lot of predictions were made that most of us would spend most of our time relaxing. What actually happened was that most of us spent more time working. The same predictions are being made about AI. But this time round they might well have something to them, given it seems AI will automate something like 50% of professions (consider reports from the torygraph, the bbc, tech republic). It looks to me as though the implications of AI for society will be enormous. It’s not surprising politicians are taking note.


It is surprising to me, though, that there are already social experiments to introduce the kind of economic structures that allow people to live in a post-job society. One concept is that of a basic income. Everyone receives it, no matter what, and it’s enough to live on. This old idea is being tried out now in different forms in different places, such as Utrecht. I think that, over the next few years, it’ll become clear how such a scheme might be implemented effectively.

If people have no job, they have no reason to live, to misquote Prof Vardi’s reported opinion in the torygraph piece. I don’t share this pessimistic opinion, because of the aristocratic experience. Aristocrats, historically, were quite notorious for living full lives without needing to hold down a job. Indeed, I know people now whose parents left them enough money to do the same thing, and they strike me as more poetical than suicidal. Rather, as the experience of previous centuries suggests, living without working is at the apex of society, not the nadir. If aristocrats could live the life others aspired to in the 18th and 19th centuries, why are such lives seen as dismal by some now? I don’t get it. Not needing to work is like not needing to be in chains: it’s not so much a reason to be depressed, more a reason to party. Those aristocrats are role models for living a full life without needing a job.

Clearly, the arrival of AI is likely to cause great social upheaval. In my opinion, some of us will be lucky enough to live in civilised societies, where the economic structure will be adapted to enable everyone to live a reasonable life. Others will live in societies with a deep social insanity where the victims of automation will be maligned, causing an instability that will put their societies on the permanent cusp of revolution (which some of the selfish may regard as positive, so long as it doesn’t affect them). AI adds instability to the world, the instability of liberty. I suspect a social order’s survival will come down to the conflict between national responsibility and national arseholedom.
