Making deep learning models transparent
Chen L, Lu X. Making deep learning models transparent. J of Med Artif Intell 2018; 1:5.
Deep learning models (DLMs) have made groundbreaking advances in many artificial intelligence (AI) fields such as speech recognition, image analysis, and game playing (1,2). As biomedical research and medicine march into a big data era, it is foreseeable that DLMs will play an increasingly important role in analyzing biomedical data.
Early-stage DLMs, often referred to as artificial neural networks (ANNs), were inspired by information processing in the biological brain. Although DLMs nowadays constitute a broader family of machine learning methods, the most common form of DLM is the deep neural network (DNN), which contains one visible (input) layer, multiple hidden layers, and one output layer (if supervised). Each hidden layer consists of a set of hidden nodes (“neurons”) fully connected to the nodes in adjacent layers, and the hierarchical hidden layers are believed to represent statistical structures at different degrees of abstraction. Interestingly, just as we have limited knowledge of how neurons in the human brain acquire and store knowledge, we generally do not know what the so-called “neurons” in DLMs encode or how they learn a mapping function from inputs to outputs. In other words, contemporary DLMs largely behave as “black boxes”.
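The layered, fully connected architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the layer sizes, the ReLU nonlinearity, and the random initialization are assumptions chosen for the sketch, not features of any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied at each hidden layer.
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # Small random weights and zero biases (illustrative initialization).
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# Hypothetical layer sizes: a 4-feature input layer, two hidden
# layers of 8 "neurons" each, and a 3-unit output layer.
sizes = [4, 8, 8, 3]
layers = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Propagate inputs through each fully connected layer in turn."""
    h = x
    for i, (W, b) in enumerate(layers):
        z = h @ W + b
        # Hidden layers are nonlinear; the final layer stays linear.
        h = relu(z) if i < len(layers) - 1 else z
    return h

x = rng.normal(size=(1, 4))   # one example with 4 input features
logits = forward(x)
print(logits.shape)           # -> (1, 3)
```

Note that the learned weight matrices `W` are exactly the quantities a reader cannot readily interpret: each hidden activation is a weighted mixture of all nodes in the previous layer, which is why the model's internal representation is opaque.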