Angle: Practical uses of "AI that explains its reasoning" could drastically change business | Reuters

[Oakland, California, April 6 (Reuters)] – Since introducing artificial intelligence (AI) software to its sales team in July last year, LinkedIn, the professional social networking site (SNS) owned by Microsoft, has seen its subscription fee revenue increase by 8%.

On April 6, it was reported that LinkedIn, Microsoft's social networking site (SNS), has increased its subscription fee revenue by 8% since introducing artificial intelligence (AI) software to its sales team in July last year. The photo is an illustration taken in June 2013 (2022 Reuters/Kacper Pempel).

This AI not only predicts, for example, which customers are likely to cancel their subscriptions, but also explains how it reached that conclusion. It is groundbreaking software that creates new business opportunities by clarifying the process by which the AI draws its conclusions.

For AI scientists, designing a system that accurately predicts all manner of business outcomes is nothing unusual. But to make such a system a more useful tool for the people managing it, the AI may need to explain its own reasoning through another algorithm, an idea scientists are beginning to embrace.

This new field, called explainable AI (XAI), is now attracting significant investment in Silicon Valley, where startups and cloud computing giants are competing to develop it.

Additionally, the US government and the European Union (EU), which want AI decisions to be fair and transparent, have stepped up discussions as XAI has emerged.

AI technology can entrench social biases related to race, gender and culture, and some AI scientists believe "explainability" plays an important role in addressing these problems.

US consumer protection authorities, including the Federal Trade Commission (FTC), have warned over the past two years that they may investigate AI whose decisions cannot be explained.

The EU could pass an "artificial intelligence law" next year that would include a series of AI-related obligations, such as enabling users to understand the reasoning behind AI predictions.

Proponents point out that XAI has made it more efficient to deploy AI in fields such as medical care and sales. For example, Google Cloud sells an XAI service that tells customers who want to improve their system's accuracy which pixels were most useful in predicting the subject of a photo.
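For illustration only, here is a minimal sketch of occlusion-based pixel attribution, one generic way to estimate which pixels drive an image prediction. The predict_proba function below is a hypothetical stand-in for a real classifier; this is not Google Cloud's actual API.

```python
import numpy as np

def predict_proba(image: np.ndarray) -> float:
    # Hypothetical classifier stand-in: "confidence" is the mean brightness
    # of the centre region, so centre pixels matter most.
    h, w = image.shape[:2]
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_saliency(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Score each patch by how much masking it lowers the model's confidence."""
    base = predict_proba(image)
    heatmap = np.zeros(image.shape[:2])
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0   # black out one patch
            # A bigger drop in confidence means these pixels mattered more.
            heatmap[y:y + patch, x:x + patch] = base - predict_proba(occluded)
    return heatmap

img = np.random.rand(32, 32)               # stand-in for a real photo
saliency = occlusion_saliency(img)
print("most influential patch starts near:", np.unravel_index(saliency.argmax(), saliency.shape))
```

Commercial services typically use more sophisticated attribution methods, but the underlying question is the same: which parts of the input most changed the prediction.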

However, critics say such explanations are unreliable because the technology for analyzing how machines "think" is still immature.

Companies developing XAI, including LinkedIn, also acknowledge there is room for improvement at every stage: analyzing the predictions the AI makes, generating explanations, verifying their accuracy, and making them actionable for users.

Still, after two years of trial and error, LinkedIn says its XAI technology has created practical value. The proof, it says, is that subscription renewals in the current fiscal year have grown 8% beyond what would normally be expected.

LinkedIn salespeople once relied on intuition and sporadic automated alerts to gauge whether customers would keep using the service.

Now the AI handles that research and analysis quickly. A LinkedIn system called CrystalCandle picks up trends that staff had not noticed and explains the reasoning behind them.

Armed with that, sales staff can hone their tactics, focusing on retaining customers who are likely to leave and pitching upgraded services to others.

According to LinkedIn, more than 5,000 sales reps now use XAI across a wide range of businesses, from recruiting to advertising, marketing, and education.

"(XAI) has armed experienced salespeople with specific information they can use to stay ahead and steer conversations, and it has helped newcomers get down to work quickly," said Parvez Ahmad, director of machine learning at LinkedIn.

LinkedIn first rolled out AI predictions without explanations in 2020. The system displays a score for how likely a customer approaching renewal is to upgrade, keep their subscription, or cancel it, with an accuracy of about 80%.
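As a rough illustration of the kind of three-way prediction described above (upgrade, renew, or cancel), here is a minimal sketch using synthetic data and scikit-learn. The features and labels are invented for the example and do not reflect LinkedIn's model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical account features: headcount growth, candidate response rate, usage index.
X = rng.normal(size=(n, 3))
# Synthetic labels loosely tied to the features: 0 = cancel, 1 = renew, 2 = upgrade.
y = np.clip((X @ np.array([0.8, 0.6, 1.0]) + rng.normal(scale=0.7, size=n)).round() + 1, 0, 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("hold-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```

A model like this produces a score, but on its own it says nothing about why a given account landed in a given category, which is the gap the article describes next.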

However, the salespeople who sell its recruiting software were not satisfied. It was unclear what sales strategy to adopt, especially when it was uncertain whether a client company would renew.

With the introduction of XAI in July last year, however, sales reps can now read automatically generated short passages describing the factors that influenced each score.

For example, the system concluded that one client company was highly likely to upgrade because it had grown by 240 employees over the past year and candidates had become 146% more responsive to it in the last month.

In another case, an index of how successfully a client company was using LinkedIn's recruiting software rose 25% in the last three months.
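As a loose illustration of how a score's top contributing factors might be turned into short written explanations like the ones above, here is a minimal templating sketch. The factor names and wording are hypothetical and are not CrystalCandle's internals.

```python
from typing import List, Tuple

# Hypothetical factor templates; a real system would tie these to model features.
TEMPLATES = {
    "headcount_growth": "the company added {value:+.0f} employees over the past year",
    "candidate_response": "candidate responsiveness changed by {value:+.0f}% in the last month",
    "usage_index": "its success index with the recruiting tools moved {value:+.0f}% in three months",
}

def explain(score: float, outcome: str, contributions: List[Tuple[str, float]]) -> str:
    """Render the two strongest factors behind a predicted outcome as one sentence."""
    top = sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)[:2]
    reasons = " and ".join(TEMPLATES[name].format(value=value) for name, value in top)
    return f"Likelihood of {outcome} is {score:.0%} because {reasons}."

print(explain(0.82, "an upgrade",
              [("headcount_growth", 240), ("candidate_response", 146), ("usage_index", 5)]))
```

The point of such narrative output is that a salesperson can act on it directly, without reading raw feature attributions.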

Some AI experts, however, question whether "explainability" is needed at all. They argue it could even be harmful, creating a false sense of assurance in AI and prompting design sacrifices that reduce the accuracy of predictions.

Fei-Fei Li, co-director of the Human-Centered AI Institute at Stanford University, points out that people use products such as Google Maps without necessarily understanding their inner workings; in such cases, rigorous testing and monitoring have dispelled doubts about whether the systems work.

It has also been argued that an AI system as a whole can be fair even if its individual decisions cannot be explained.

LinkedIn counters that the integrity of an algorithm cannot be assessed without understanding its reasoning.

Moreover, AI like CrystalCandle could be applied to other fields. For example, if an AI predicts that a patient is at high risk of developing a particular disease, doctors could learn why, and people rejected for a credit card could be told why the AI reached that decision.

Been Kim, an AI researcher at Google, says explanations should ideally clarify whether an AI system matches the concepts and values its operators want it to have. "We believe that explainability will ultimately enable human-machine dialogue. If we really want human-machine collaboration, we need explainability," Kim said.

(Reporting by Paresh Dave)
