From my perspective, improving the explainability of AI applications is pivotal for three reasons. First, an explainable process offers a shortcut for troubleshooting: because the process is transparent and trustworthy, bugs can be located quickly from the XAI feedback. Second, being able to explain its decisions makes an AI application more accessible to the public, attracting more people to engage with and contribute to the community. Last but not least, by approaching AI through explainable mathematics, the advantages of mathematical rigor can be fully exploited, strengthening XAI to a great extent.