<< The scientific method is the cornerstone of human progress across all branches of the natural and applied sciences, from understanding the human body to explaining how the universe works. The scientific method is based on identifying systematic rules or principles that describe the phenomenon of interest in a reproducible way that can be validated through experimental evidence. In the era of artificial intelligence (AI), there are discussions on how AI systems may discover new knowledge. >>
<< More specifically, knowing what data AI systems used to make decisions can be a point of contact with domain experts and scientists, which can lead to divergent or convergent views on a given scientific problem. Divergent views may spark further scientific investigations leading to new scientific knowledge. Convergent views may instead reassure that the AI system is operating within bounds deemed reasonable to humans. >>
<< The perspective (AA) presented here was inspired by several authors who published on the topic of AI for science in the past few years, but perhaps one contribution stands out: the inspiring New York Times editorial by Steven Strogatz (Strogatz, S. One giant step for a chess-playing machine. New York Times 26 (2018)) covering AlphaZero's victory over Stockfish. In that piece, Strogatz states: “What is frustrating about machine learning, however, is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted. AlphaZero gives every appearance of having discovered some important principles about chess, but it can’t share that understanding with us.” He additionally cites Garry Kasparov (the former world chess champion), who stated: “we would say that its [AlphaZero] style reflects the truth. This superior understanding allowed it to outclass the world’s top traditional program despite calculating far fewer positions per second.” >>
AA highlights the importance of three aspects regarding scientific XAI (explainable Artificial Intelligence): accuracy, reproducibility, and understandability.
Apropos of 'understandability', << The machine view should be understandable to scientists and domain experts. (...) If we want a scientist to make sense of the data used by a machine, this data should contain viable features that allow a scientist to tap into their existing corpus of knowledge. >>
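As a purely illustrative aside (not from the cited paper): one common way to surface "the data used by a machine" in terms a domain expert can inspect is a post-hoc feature-importance ranking over named features. The dataset, model, and scikit-learn permutation-importance routine below are assumptions chosen only to make the sketch runnable; they are not the method discussed by AA.

# Minimal sketch: rank named clinical features by how much shuffling each one
# degrades a fitted model's held-out score. Features whose permutation hurts
# most are the ones the model relies on, giving a scientist a concrete ranking
# to compare against their existing corpus of knowledge.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset with named features (age, bmi, bp, ...); any dataset
# with scientifically meaningful features would serve the same purpose.
data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Any predictive model could stand in here; the point is the post-hoc view.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permute each feature in turn and measure the drop in test-set score.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{data.feature_names[idx]:>4s}: "
          f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}")

A domain expert can then compare such a ranking with established knowledge: agreement gives the convergent reassurance mentioned above, while disagreement is the kind of divergent view that may warrant further scientific investigation.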
<< XAI may also alleviate some of the risks we may face when using AI for scientific discovery, risks we share with Messeri and Crockett (‘adopting AI in scientific research can bind to our cognitive limitations and impede scientific understanding despite promising to improve it’). >>
Gianmarco Mengaldo. Explain the Black Box for the Sake of Science: Revisiting the Scientific Method in the Era of Generative Artificial Intelligence. arXiv: 2406.10557v1 [cs.AI]. Jun 15, 2024.
Also: ai (artificial intell), in https://www.inkgmr.net/kwrds.html
Keywords: AI, XAI, Artificial Intelligence