
We will be gamed: Developing AI systems and interfaces

The first post in this series about AI looked at how even dedicated chess engines powered by "weak" AI now routinely beat grandmasters. Though the algorithms are likely still inefficient, they can evaluate permutations many times more quickly than the human brain.

Whilst it seems uncontentious that AI should support decision making, and that its development should be informed by human constructs such as games, real-life situations are less tangible and well defined. Morality, accountability, value judgements and human fallibility compound the complexity.

With parameters and variables set up, game theory and deep learning help focus calculations on just the most promising outcomes in a given situation. But beyond outstanding play and situational management, the ability to manipulate huge data sets through multiple dimensions to find novel insights offers extraordinary possibilities.

 

Google Analytics can already show how a website performs against similar sites on criteria such as bounce rate. I look forward to when it offers an opinion about why visitors are leaving a site too quickly, e.g. the font size of the body copy is likely too small to be legible to 70% of the visiting demographic.
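As a toy illustration of the kind of rule such a feature might encode, here is a minimal sketch; the 16px threshold, the bounce-rate cut-off and the sample data are all invented for illustration, not anything Google Analytics actually offers.

```python
# Hypothetical heuristic of the kind an analytics tool might one day apply:
# flag pages whose body copy is likely too small for visitors to read.
# Threshold values and data below are invented for illustration.

def legibility_warnings(pages, min_font_px=16):
    """Return a diagnostic for each page with a high bounce rate
    and a body font size below the legibility threshold."""
    warnings = []
    for page in pages:
        if page["bounce_rate"] > 0.6 and page["body_font_px"] < min_font_px:
            warnings.append(
                f"{page['url']}: body copy {page['body_font_px']}px may be "
                f"hard to read; bounce rate {page['bounce_rate']:.0%}"
            )
    return warnings

pages = [
    {"url": "/home", "bounce_rate": 0.72, "body_font_px": 12},
    {"url": "/about", "bounce_rate": 0.35, "body_font_px": 18},
]
print(legibility_warnings(pages))
```

The real insight the post imagines would of course come from a learned model, not a hand-written threshold; the sketch only shows the shape of the diagnosis.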

 

Game: "to use your knowledge of the rules to obtain benefits from a situation, especially in an unfair way"

Macmillan Dictionary

 

DeepMind's AlphaZero, a step towards strong AI, taught itself with some customisation to play chess, shogi and the more complicated game of Go (building on AlphaGo Zero) at world-champion level. Its sibling AlphaFold is the only entity on the planet capable of crunching the mind-boggling permutations needed to predict the possible structures of folding protein molecules.

Perhaps the most extraordinary aspect of AlphaZero's achievement is how it trained itself. In December 2017, starting with just the rules, it took only a few days to comprehensively beat Stockfish 8, the reigning AI chess champion (these days humans are "not very close").

Although many chess grandmasters are involved with AlphaZero, it is closer to an artificial general intelligence, or strong AI, with ground-breaking deep learning: strong AI can train itself to perform new and different functions. Stockfish, on the other hand, is a narrow or weak AI, capable only of playing chess.

 

The future is already here – it’s just not evenly distributed.

William Gibson 2003

 

AI, being logical, would appear to make interactions more straightforward, but that might not be a great experience for people. Game-theory mathematics is already used to model economic and sociological situations, but what will happen when people encounter pure logic and "computer says no"? Mr. Spock didn't always enjoy harmonious relations.
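The tension between logical play and human expectations is captured by the textbook prisoner's dilemma, the classic example from the game-theory modelling mentioned above. A minimal sketch, using the standard payoff values:

```python
# The prisoner's dilemma: strictly logical play leads both players to
# defect, even though mutual cooperation pays each of them better.
# Payoffs are the textbook values (higher is better for that player).

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Pick the payoff-maximising move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Defection dominates: it is the best response to either opponent move,
# so the "computer says no" outcome (1, 1) beats out mutual cooperation (3, 3).
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

Both players reasoning perfectly end up worse off than if they had cooperated, which is exactly the kind of logically correct but humanly unsatisfying outcome the paragraph above worries about.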

There is no shortage of dystopian, "bad robot" sci-fi. So when strong AI starts to improve itself in ways we are unlikely to understand, it will be important for us to define the questions it answers, the tasks it undertakes, the rules it obeys and the values it upholds, especially once it starts running critical services.

 

Image of TED talk about how AI is different from human intelligence
It often helps to know the reasons behind a decision i.e. transparency. AI will get things wrong in ways we won’t understand, so accurately defining the problems it solves will be important.

 

"Understand user needs" is the first point of the GOV.UK Service Standard, and arguably a good starting point when integrating AI into the world. Other considerations might be:

  • What constitutes necessary and sufficient conditions
  • Working with incomplete data
  • Managing exceptions
  • What to do when someone cannot engage for whatever reason (accessibility).

 

Effective systems are robust in the real, unpredictable world and can recover from errors. And whilst fuzzy logic helps to address uncertainty, enrich programmed meaning and provide situational awareness, the world can be messier than fuzzy.
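Fuzzy logic's core idea is to replace a hard true/false cut-off with a degree of membership between 0 and 1. A minimal sketch; the temperature breakpoints are invented for illustration:

```python
# A fuzzy membership function: instead of asking "is it warm? yes/no",
# it answers "how warm is it?" on a scale from 0 to 1.
# The 15°C and 25°C breakpoints are invented for illustration.

def membership_warm(temp_c):
    """Degree to which a temperature counts as 'warm':
    0 below 15°C, 1 above 25°C, linear in between."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10

print(membership_warm(10))  # 0.0 - clearly not warm
print(membership_warm(20))  # 0.5 - partly warm
print(membership_warm(30))  # 1.0 - fully warm
```

Graded membership like this lets a system reason smoothly near boundaries, though, as the paragraph above notes, real-world messiness goes well beyond what any membership function can capture.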

 

Below are some considerations that seem relevant for developing robust AI based systems with a user centred approach. A subsequent post will look at how user research can address them.

  • Give actual users ways to contribute meaningfully to development, including defining areas for improvement and how interacting with the system makes them feel
  • Involve users in assessing real-world performance
  • Have human experts and service managers define the metrics for success and failure
  • Model the domain and process
  • Plan the entire service experience for the real world
  • Define how to handle exceptions
  • Involve human experts, users and managers in training systems
  • Assess the system's usefulness holistically
  • Guide evolution in the ecosystem

 

Science can only ascertain what is, but not what should be, and outside of its domain value judgements of all kinds remain necessary.

Albert Einstein 1935

Cover image by Gerd Leonhard, licensed under Creative Commons.
