The idea of an “information bottleneck” in artificial neural networks (ANNs) describes how these systems compress the signals flowing through them, and it offers a practical lens for examining the trade-offs by which artificial intelligence systems optimize themselves. A Wired article describing the information bottleneck concept presented by Tishby et al. talks about “ridding noisy input data of extraneous details as if by squeezing the information through a bottleneck” and “retaining only the features most relevant to general concepts.”
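In the formulation by Tishby and colleagues, the bottleneck is a compressed representation T of an input X that keeps only what is informative about a target Y. Roughly, one seeks the encoding that minimizes

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I denotes mutual information and the parameter β trades off compression of the input (a small I(X;T)) against retention of the relevant features (a large I(T;Y)).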
As a relatively new concept, the information bottleneck can enhance and change how we use ANNs and related systems to model cognitive function. One contribution is a better understanding of the paradigms that support neural network behavior. For instance, if the principle illustrates how a network retains only a certain feature set, we begin to see how this “data discrimination” lets a network mimic the human brain, and engineers can build that insight into neural network models. The hope is that neural network technology will eventually become a “universal” concept rather than the province of a privileged few. Companies are currently hunting for scarce AI talent; theories like the information bottleneck can help spread knowledge of neural networks to laypeople and to “middle users” – those who may not be experts but who can help in the emergence and dissemination of neural network technologies.
Another important value of the information bottleneck is that it gives engineers a way to train systems more precisely. Top-level guidelines for system architecture can streamline the evolution of these technologies, so a more clearly defined set of deep learning principles is valuable in the IT world.
In general, the vanguard working on AI will continue to examine exactly how neural networks work, including what counts as “relevant information” and how systems discriminate among inputs to perform their functions. One example is image or speech processing, where systems must learn to identify many variations of a thing as the same “object.” The information bottleneck offers a particular view of how a neural network handles those objects, and specifically of how these data models process information.
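The “many variations collapse to one object” idea can be sketched in a few lines of NumPy. This is my own toy illustration, not Tishby et al.’s method: inputs carry one label-relevant direction plus nuisance noise, and a one-dimensional “bottleneck” projection keeps the label while discarding the variation.

```python
import numpy as np

# Toy sketch (an illustrative assumption, not the published IB algorithm):
# inputs mix a label-relevant signal with irrelevant nuisance variation.
rng = np.random.default_rng(0)

signal = np.array([1.0, 0.0, 0.0, 0.0])     # the label-relevant direction
n = 200
labels = rng.integers(0, 2, size=n)          # the "general concept" Y
noise = rng.normal(scale=0.2, size=(n, 4))   # extraneous detail
noise[:, 0] = 0.0                            # nuisance lives off the signal axis
X = labels[:, None] * signal + noise         # many input "variations" per label

# A 1-D "bottleneck": squeeze each 4-D input down to one number.
T = X @ signal

# The compressed code still predicts the label...
pred = (T > 0.5).astype(int)
accuracy = (pred == labels).mean()
print(f"label accuracy from 1-D code: {accuracy:.2f}")  # prints 1.00

# ...but the nuisance detail in the other three coordinates is gone:
# every variation of the same "object" maps to the same code.
```

Because the projection is chosen along the signal direction, the code retains exactly the feature relevant to the concept and nothing else, which is the intuition behind squeezing information through a bottleneck.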