
IBM is launching a service that will warn of bias in artificial intelligence applications and systems

The service will also provide real-time explanations of the systems' decision-making process and even recommend how the bias can be corrected. A new IBM study found that 82% of organizations are considering implementing artificial intelligence systems, but 60% fear being held legally responsible for the decisions those systems make.

Artificial intelligence. Illustration: shutterstock

Artificial intelligence systems offer a leap forward in data analysis, but their Achilles' heel is the quality of the data used to train them - and the biases built into the applications themselves. Now IBM is launching a new tool that will scan artificial intelligence applications as they run, identify cases of bias in the analysis, explain the reasons for the bias, and even recommend how it can be avoided in the future. In addition, IBM's research division will make available to the open-source community a set of tools for diagnosing biases in the construction and deployment of machine learning models.

Together, the new service and the developer toolkit constitute the most comprehensive move of its kind in the industry to promote transparency, trust, and control in the development of new artificial intelligence technologies, with the aim of encouraging cooperation in dealing with the risks such biases may pose. The service and the toolkit are based on a document recently published by IBM that lays out guiding principles for the ethical development of artificial intelligence systems.

The fully automated software service, which will run in IBM's cloud environment, will explain biases in the decision-making of artificial intelligence applications as decisions are made, and will flag unfair results as the system produces them. The new tool will explain the bias it has identified in simple terms, with the help of visual dashboards, and will even indicate which variables skewed the decision in a certain direction relative to others. It will also provide a detailed score of its confidence in the recommendation the system presents, along with the factors behind that score.
The new tool will also be able to recommend data that should be added to the model in order to minimize and prevent biases it has detected. In addition, all records relating to model accuracy, performance, and fairness, as well as the decision-making process in the artificial intelligence system, can be tracked and audited, whether to improve service or to ensure compliance with regulations such as the GDPR.
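To make the idea concrete, here is a minimal sketch (not IBM's actual service API - the function names and data are hypothetical) of the kind of fairness check such a monitoring tool performs: comparing the rate of favorable outcomes between a privileged and an unprivileged group.

```python
def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """Difference in favorable-outcome rates between groups.
    0 means parity; negative values indicate bias against
    the unprivileged group."""
    return selection_rate(unprivileged) - selection_rate(privileged)

# Hypothetical model decisions (1 = loan approved, 0 = denied)
group_a = [1, 0, 1, 1, 1, 0, 1, 1]  # privileged group: 6/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # unprivileged group: 3/8 approved

spd = statistical_parity_difference(group_b, group_a)
print(f"statistical parity difference: {spd:.3f}")  # -0.375
```

A monitoring service of the kind described would compute metrics like this continuously on live decisions and alert when they cross a threshold.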

"In order for artificial intelligence systems to evolve, and for businesses to take advantage of their benefits, managers need to trust them," said Dr. Aya Sofer, global vice president of artificial intelligence technology at IBM. "Managers need the ability to manage the systems, identify and deal with bias. It is of utmost importance, and sometimes also a legal requirement, that transparency be part of the decision-making process in artificial intelligence applications. The service and the new set of tools will help open the "black box" of artificial intelligence applications and give companies confidence in their ability to manage these systems and explain how they made the decisions."

Dr. Aya Sofer, Global Vice President of Artificial Intelligence Technology at IBM. Photo: PR

IBM's new service works with models built on a variety of machine learning and artificial intelligence environments, such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. The breadth of supported environments ensures that many organizations will be able to take advantage of its control and bias-prevention capabilities in most of the artificial intelligence environments currently in use. The service can also be programmed to monitor organization-specific variables in decision-making processes and tailored to a particular use case.
At the same time, IBM's research division announced a set of tools that will be made available to the open-source community and will enable bias diagnosis and fairness testing for artificial intelligence systems. The toolkit, called AI Fairness 360, includes a library of novel algorithms, code, and tutorials that will give developers, researchers, data scientists, and academics the tools and knowledge to integrate bias detection into the building and deployment of machine learning models.
Unlike other tools available to the open-source community, which focus solely on checking bias in the data used to train the system, the new toolkit can help detect and prevent biases in the artificial intelligence model itself.
Meanwhile, a new survey by IBM's Institute for Business Value (IBV), conducted among 5,000 senior managers in organizations worldwide, points to a significant shift in how managers perceive the potential of artificial intelligence. The survey reveals that while 82% of organizations are currently considering implementing artificial intelligence applications, 60% of respondents fear issues of legal responsibility for decisions made by these applications, and 63% responded that they lack the skilled internal personnel needed to manage the new technologies safely.

Summary of the survey on the IBM website

