The New Special Issue "Deciphering the Link Between Information and Interpretability in Deep Learning and Artificial Intelligence" is Open for Submissions
Entropy MDPI
Entropy is an international and interdisciplinary peer-reviewed open access journal of entropy and information studies.
Guest Editors: Michael Mayo and Kevin Pilkiewicz
Submit to the Special Issue: https://www.mdpi.com/journal/entropy/special_issues/L4FHXN24IW
Submission deadline: 31 May 2025
Special Issue Information: Deep learning models have emerged as a reliable way to identify patterns and correlations in large, complex datasets. However, applying these models to increasingly complex tasks, such as spatiotemporal object tracking or synthetic data generation, has required increasingly sophisticated architectures whose emergent capabilities are often difficult to understand and interpret from a first-principles perspective. Information theory, with its focus on quantifying correlations and uncertainty within datasets, offers fertile ground for novel investigations into how the structure and parameters of a deep learning network act together to extract predictive patterns from data.

In this Special Issue, we seek manuscripts that leverage information theory, or other statistical and correlative metrics, to interrogate how the robust, multifaceted functionality of deep learning architectures and artificial intelligence emerges from the iteration of relatively simple mathematical operations that are individually agnostic to the particulars of the training data. As society increasingly looks to artificial intelligence algorithms as a substitute for human effort, an unacceptable degree of mystery still surrounds the mechanisms behind their remarkable predictive and even creative capabilities. It is our hope that this Special Issue will help bridge this gap and bring the state of the science more in line with the current state of the art.