Can computers make decisions like humans? A new study may have the answer
Method enables artificial intelligence to be more similar to human intelligence
A team of British researchers has developed a method that enables computers to make decisions in a way that is more similar to humans. Specifically, the method mimics the complex process of human decision making by enabling computers to produce several acceptable decisions for a single problem.
The research was published in the May issue of IEEE/CAA Journal of Automatica Sinica (JAS).
Human decision making is not perfect, and different decisions may be reached even when the same input is given. This is called variability, and it exists on two levels: among a group of individuals who are experts in a field, and among the decisions made by a single expert over time. These are referred to as inter- and intra-expert variability. Having established that this variation in decision-making behavior is an important feature of expert systems, the researchers propose that, rather than expecting computers to make the same decision 100% of the time, they should instead be expected to perform at the same level as humans.
“If the problem domain is such that human experts cannot achieve 100% performance, then we should not expect a computer expert system in this domain to do so, or to put it another way: if we allow human experts to make mistakes, then we must allow a computer expert system to do so,” says Jonathan M. Garibaldi, Ph.D., Head of School of Computer Science at the University of Nottingham, UK, who leads the Intelligent Modelling and Analysis (IMA) Research Group.
The investigators have found a way to introduce variation into computers and show that there is benefit to be gained in doing so. By using fuzzy inference – a system of ‘if-then’ rules in which data can be represented anywhere on a range from 0 to 1, rather than as strictly 0 or 1 – they were able to create a computer system that makes decisions with variability similar to that of human experts.
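The article does not reproduce the researchers' actual system, but the idea can be illustrated with a minimal, hypothetical sketch: a Mamdani-style fuzzy rule maps an input (here, a made-up "symptom severity" score) onto graded memberships in "low" and "high" categories, and a small, controlled perturbation of the membership boundaries makes repeated runs on the same input yield slightly different, yet all plausible, decisions. The function names, scales, and rule values below are illustrative assumptions, not the paper's.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def decide(symptom_severity, jitter=0.0):
    """Toy fuzzy decision: map a severity score (0-10) to a treatment
    urgency (roughly 2-9) via two if-then rules.

    `jitter` randomly shifts the membership boundaries, mimicking the
    carefully controlled variation the researchers describe: the same
    input can produce slightly different, but still plausible, outputs.
    """
    j = random.uniform(-jitter, jitter)
    low = tri(symptom_severity, -1 + j, 0 + j, 5 + j)    # rule: low severity
    high = tri(symptom_severity, 3 + j, 8 + j, 13 + j)   # rule: high severity
    total = low + high
    if total == 0:
        return 5.0  # fall back to a neutral urgency
    # Defuzzify: weighted average of the rule outputs (urgency 2 vs. 9).
    return (low * 2 + high * 9) / total

random.seed(0)
# Same input, repeated runs -> a set of varying but acceptable decisions.
decisions = [round(decide(4.0, jitter=0.5), 2) for _ in range(3)]
print(decisions)
```

With `jitter=0` the system is deterministic, as a conventional expert system would be; with a small nonzero jitter, it behaves more like a panel of human experts who agree on the ballpark but not on the exact answer.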
“Exploring variation in decision making is useful. Introducing variation in a carefully controlled manner can lead to better performance,” adds Garibaldi. “Unless we allow computer systems to make the same mistakes as the best humans, we will delay the benefits that may be available through their use.”
The researchers view artificial intelligence as a tool that helps address problems and supports decision making. For example, instead of expecting AI to replace a doctor in choosing the best treatment option for a cancer patient, it should be used to help physicians avoid the “most wrong” choices among the range of options that a trained human doctor (or a group of trained human doctors) might consider.
“Computers are not taking over but simply providing more decisions,” says Garibaldi. “This is time- and ultimately life-saving, because disasters happen as a result of sub-optimal care. By providing a set of alternative decisions, all of which could be correct, computers can act as ‘adjunct experts’ in the room that rule out wrong decisions and help humans avoid glaring mistakes.”
In the future, the researchers hope to bring these systems into real medical use, so that when a concrete problem arises, a computer system is available to address it and support real-life decision making.
This research was supported by the University of Nottingham.
Full text of the paper is available:
IEEE/CAA Journal of Automatica Sinica aims to publish high-quality, high-interest, far-reaching research achievements globally, and to provide an international forum for the presentation of original ideas and recent results related to all aspects of automation. Researchers (including globally highly cited scholars) from institutions all over the world, such as NASA Ames Research Center, MIT, Yale University, Stanford University, and Princeton University, choose to share their research with a large audience through JAS. We are pleased to announce that IEEE/CAA Journal of Automatica Sinica’s latest CiteScore is 5.31, ranked in the top 9% (22/232) of the “Control and Systems Engineering” category and in the top 10% of both the “Information Systems” (27/269) and “Artificial Intelligence” (20/189) categories. JAS has entered the first quartile (Q1) in all three categories to which it belongs. Why publish with us: fast, high-quality peer review; a simple and effective online submission system; the widest possible global dissemination of your research; indexed by IEEE, ESCI, EI, Scopus, and Inspec. JAS papers can be found at http://ieeexplore.