Four approaches to realizing "responsible AI"
"Responsible AI" refers to how companies fulfill their social responsibility in the design, development, and use of AI.
Accenture recently held a study session on "responsible AI".
The session explained the background behind the demand for responsible AI, the risks involved in using AI, and the measures companies should take.
In this article, I will introduce what was explained there. I hope companies that use AI will read it.
The social impact of AI
The AI market currently has a high growth rate and is expected to remain a stable growth field.
In companies, AI has begun to support people's decision-making, for example in the automation of business processes using RPA. The use of AI is also progressing in areas with large social impact, such as healthcare, human resources, and social infrastructure.
Against this background, awareness is growing worldwide that AI must be implemented responsibly.
Gakusho Hoshina, Director of the AI Group at Accenture's Business Consulting Headquarters (hereinafter Hoshina), cited "racial inequality" as a risk brought about by AI.
In fact, there was a reported case in which a Black man was wrongly arrested in a police investigation because of an error made by facial recognition technology.
Following this incident, a major company withdrew from the face recognition market to avoid AI-related risk.
The anti-racism protest movement "Black Lives Matter (BLM)" is currently expanding, and BLM has also affected AI.
Under these circumstances, the movement to establish AI utilization guidelines is said to be progressing worldwide.
Mr. Hoshina said, "In Japan, too, there is a movement to formulate guidelines, and we are at a point where companies must seriously consider how they develop and use AI."
Four approaches to realizing "responsible AI"
In response to this social background, Accenture defines "responsible AI" as an approach that ensures the fairness and transparency of AI for customers and society.
To realize "responsible AI", Accenture maintains guidelines for the development and use of AI summarized by the acronym "TRUST", from the initials of "Trustworthy", "Reliable", "Understandable", "Secure", and "Teachable". In line with TRUST, companies practice four approaches ("technology", "brand", "governance", and "organization / human resources") to ensure fairness and transparency.
Below, we will explain the four approaches.
Responsible AI approach 1: Technology
First, Hirokazu Suzuki, Manager of the AI Group at the Business Consulting Headquarters (hereinafter Suzuki), discussed "technology" as the first approach to "responsible AI".
In the preprocessing stage of AI development, training data is created through data collection, data cleansing, and feature selection. A learning algorithm is then constructed, and the AI model produced by training on that data is evaluated. AI is developed by repeatedly turning this cycle.
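The cycle just described (data collection, cleansing, feature selection, training, evaluation) can be sketched in code. The example below is a minimal, hypothetical illustration: the toy data and the mean-threshold "model" are inventions for this sketch, not part of Accenture's actual process.

```python
# A minimal sketch of the AI development cycle described above:
# collection -> cleansing -> feature selection -> training -> evaluation.
# All data and the threshold "model" are hypothetical illustrations.

def collect_data():
    # Raw records: (hours_of_study, attendance_rate, passed). One record
    # is broken (None) so the cleansing step has something to do.
    return [
        (10, 0.9, 1), (2, 0.4, 0), (8, 0.8, 1),
        (1, 0.3, 0), (None, 0.5, 0), (9, 0.95, 1),
    ]

def cleanse(records):
    # Drop records containing missing values.
    return [r for r in records if None not in r]

def select_features(records):
    # Keep only hours_of_study as the feature, plus the label.
    return [(hours, passed) for hours, _, passed in records]

def train(samples):
    # "Training": pick the midpoint between the two class means as a threshold.
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def evaluate(threshold, samples):
    # Accuracy of the thresholded prediction against the labels.
    correct = sum((x >= threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

# One turn of the cycle:
data = select_features(cleanse(collect_data()))
model = train(data)
accuracy = evaluate(model, data)
print(f"threshold={model:.2f}, accuracy={accuracy:.2f}")
```

In a real project each step would be far more involved, but the shape of the loop (and the human choices embedded in each step) is the same.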
On the other hand, although automation is progressing in the AI development process, human decision-making remains important in AI development.
For example, when handling data, a range of human judgments is required.
In other words, "as long as human decisions are needed in developing AI, human biases and beliefs can be mixed into the AI."
For example, there have been cases where a chatbot began making discriminatory remarks as it continued to learn.
This phenomenon is called "operation bias".
In general, AI uses feedback data gathered during operation to improve itself, but if that feedback data is itself biased, the results can be unintended.
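To see how biased feedback can skew a system during operation, consider the following minimal simulation. The numbers and the running-average update rule are hypothetical, chosen only to illustrate the mechanism:

```python
# A minimal sketch of "operation bias": an estimate updated from feedback
# drifts away from reality when the feedback channel itself is biased.

def update_estimate(estimate, feedback, weight=0.1):
    # Running update that nudges the estimate toward each feedback signal.
    return estimate + weight * (feedback - estimate)

# Suppose the true positive-reaction rate in the population is 0.5, but the
# feedback channel only ever captures positive reactions (biased feedback).
estimate = 0.5
for _ in range(50):
    estimate = update_estimate(estimate, feedback=1)

# The estimate has drifted far above the true rate of 0.5.
print(f"estimate after 50 rounds of biased feedback: {estimate:.2f}")
```

The system is working exactly as designed; the distortion comes entirely from what the feedback data does and does not contain.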
In addition, there is a tendency to emphasize the hypotheses and results that developers believe to be correct; this is called "confirmation bias", and it affects the design of the algorithm itself.
The bias with the largest impact is "social prejudice": when the data used to train AI contains racial or gender bias, it can adversely affect the AI's behavior.
Implementation of "algorithm assessment"
As described above, AI development carries a variety of bias-related risks, from biased sampling of training data to "confirmation bias".
Accenture therefore recommends establishing the reliability of AI by implementing an "algorithm assessment" in AI-based decision-making.
The algorithm assessment has three steps: "intake", "assessment process", and "results report".
Step 1: Intake
The intake step prioritizes AI use cases. Priority is not necessarily assigned from the viewpoint of revenue and profit, but based on how large a risk impact the use case has on the business.
Understanding the use case is therefore very important, and that understanding requires involving not only researchers and managers but also the people affected by the AI.
Step 2: Assessment process
In the assessment process step, a technical evaluation is performed. This evaluation is broadly divided into two types: "quantitative evaluation" and "qualitative evaluation".
In quantitative evaluation, the fairness and transparency of the AI's performance are evaluated numerically.
In qualitative evaluation, each stakeholder evaluates fairness through interviews and questionnaires.
Through this process, the latent risks of the AI system are surfaced, and measures are proposed to remedy unintended results.
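As a concrete illustration of the quantitative side, one widely used fairness metric is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below uses hypothetical predictions and group labels; it shows the kind of number such an assessment might report, not Accenture's actual toolkit.

```python
# A minimal sketch of a "quantitative evaluation" of fairness:
# demographic parity difference between groups. All data is hypothetical.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rate between the best- and worst-treated group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical model outputs (1 = approved) and each applicant's group.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")
```

Here group A is approved 75% of the time and group B only 25%, giving a gap of 0.50; a value near zero would indicate parity on this particular metric.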
Step 3: Results report
Finally, the findings from the assessment process are compiled into a results report.
The audience for this report is diverse, spanning technical and non-technical stakeholders.
To build mutual understanding among stakeholders and communicate a unified view, the report must be easy for all of them to understand, rather than a recital of technical terms.
An example use of "algorithm assessment"
Here, Allied Irish Bank was introduced as an example of algorithm assessment in use.
In this case, the "fairness" of an AI model used for decision-making was at issue, and the bank considered incorporating assessments there.
Accenture provided an algorithm assessment toolkit and carried out an evaluation aimed at improving the fairness of the AI model and reducing bias.
As a result, the data and AI models could be fully evaluated from a fairness perspective, in a form understandable not only to data scientists but also to management.
At the same time, decision-making in the model-building process improved.
Responsible AI approach 2: Brand
The second responsible AI approach, "brand", means reflecting stakeholders' interests in the core businesses that use AI, in order to secure long-term competitiveness and business resilience.
In recent years, the ESG perspective has become very important, yet many Japanese companies have not made much progress on how AI fits into it. With regard to social responsibility in particular, the point is how to use AI to contribute.
If "responsible AI" is not actually practiced, a single problem can damage the corporate brand and harm consumers.
Problems from the "environmental", "social", and "governance" perspectives
The following summarizes, from each of the "environmental", "social", and "governance" perspectives, what problems can arise for a brand.
Regarding the creation of training data, many companies outsource to gig workers (people who take on work through the Internet), and in recent years gig workers' wages and working conditions have been raised as problems.
Some therefore argue that AI development requires a fair-trade mindset, and that companies must consider whether problems exist in their external AI supply chain.
To that end, Mr. Suzuki said, "We need to keep an eye on ESG in the development, deployment, and use of AI, constantly ask ourselves about compliance, working conditions, environmental impact, and social significance, and put the answers into practice. By communicating this externally, we can expect to maintain and improve corporate brand value."
Responsible AI approach 3: Governance
"Governance" as a responsible AI approach means building a governance system for managing AI risks, making ethical judgments, and executing the AI strategy across the entire company.
As for why governance is needed: as AI use expands across a company, a growing number of executives believe that, for a company aiming at sound management, failing to put an AI monitoring system in place will directly lead to declining business performance.
However, 63% of leaders answered that "monitoring AI systems is important, but I do not know how to do it."
In addition, 24% answered that they "have been forced to review their AI systems because of a lack of transparency and inconsistent or incorrect results."
This suggests that as AI spreads through a company, a company-wide governance system is needed, rather than managing AI department by department.
Six phases for practicing governance
Accenture envisions six phases as the steps for putting governance into practice.
Phase 1: "Ethics committee"
In Phase 1, an ethics committee is established. The committee should gather not only AI experts but also experts in fields such as the humanities and law, forming a body that provides the necessary advice and approvals.
Phase 2: "Commitment from top management"
In Phase 2, top management comes to understand "responsible AI" and builds a system to support its practice throughout the company and its organizations.
Phase 3: "Training & communication"
In Phase 3, "responsible AI" education is provided to all employees in the company.
Phase 4: "Red team and firefighters"
In Phase 4, a red team and firefighters are placed in the organization.
"Red team" is originally a security term; here, the team's role is to look at the negative aspects of AI as well as the positive ones and prevent ethical problems from occurring. The firefighters' role, when an incident (a "fire") occurs, is to identify its cause and apply the appropriate initial response, which they learn and practice in advance. With this training, any employee can take on the role.
Phase 5: "Ethics indicators that bring positive impact"
In Phase 5, business value and ethics indicators are brought into alignment.
Phase 6: "An environment where problems can be raised"
In Phase 6, an environment is prepared in which problems can be raised when they occur. As a corporate organization, it is important to have the courage to listen when someone raises a voice or flags a problem.
Responsible AI approach 4: organization / human resources
The last approach is developing the organization and its human resources to foster a "responsible AI" culture.
Training is necessary to foster that culture, and it should be designed around the issues faced by management, business members, and development members.
Issues for each position
For management, the issue is that while "responsible AI" must be incorporated into corporate strategy to achieve long-term competitiveness and business resilience, the specific approaches are not well understood. Management needs to learn not only the basics of "responsible AI" but also the rules of governance and ways of thinking about risk reduction.
In some cases, business members have not grasped the risks of AI. When that happens, the discovery of and response to problems can be delayed, which can become a risk factor in corporate activities. Business members need practical training in what kinds of bias actually exist and how to assess them.
Development members require training from a more technical point of view. AI development members tend to emphasize technical matters and to be confident in their technology.
As described above, in the AI development process it is necessary to develop AI while staying conscious, day to day, of the risk of human bias, and that process must be learned.
The training menu should therefore cover how to identify bias in a model and how to improve a model's interpretability.
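One simple technique for identifying bias in a model, which such a training menu might cover, is a counterfactual test: flip a sensitive attribute and count how often the prediction changes. The toy model below is deliberately biased so the test has something to find; all names and numbers are hypothetical.

```python
# A minimal sketch of a counterfactual bias test: flip the sensitive
# attribute and see whether the model's decision changes. The "model"
# here is a deliberately biased toy scoring rule, invented for this sketch.

def biased_model(income, group):
    # Toy model: directly penalizes group "B" (the bias we want to detect).
    score = income - (20 if group == "B" else 0)
    return 1 if score >= 50 else 0

def counterfactual_flip_rate(model, records):
    """Fraction of records whose prediction changes when the group is flipped."""
    flips = 0
    for income, group in records:
        other = "B" if group == "A" else "A"
        if model(income, group) != model(income, other):
            flips += 1
    return flips / len(records)

# Hypothetical applicants: (income, group).
applicants = [(60, "A"), (60, "B"), (55, "A"), (40, "B"), (90, "B")]
rate = counterfactual_flip_rate(biased_model, applicants)
print(f"predictions changed by flipping group: {rate:.0%}")
```

A flip rate well above zero signals that the sensitive attribute is influencing decisions; for a fair model, flipping the group should leave predictions unchanged.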
Finally, Suzuki said, "By conducting training suited to the positions of management, business members, and development members, 'responsible AI' will take root in the corporate culture and organizational climate."
中川 誠大: Covers a variety of topics on IoT and delivers them to readers. Currently studying the value created by incorporating digital technology into business, with a particular interest in the AI field.