
Artificial neural networks learn to control biological neuronal networks

June 8, 2018: It is already common practice to influence brain activity by means of deep brain stimulation – that is, through electrical impulses delivered via implanted electrodes – for example in patients suffering from Parkinson's disease or epilepsy. The effects, however, are still poorly understood. "I send an electrical pulse in, usually in a strictly predetermined rhythm, but even though it works, I'm not sure what the brain does with it and how much depends on the neuronal activity going on in the background," explains Dr. Sreedhar Kumar from the Bernstein Center Freiburg and the Department of Microsystems Engineering (IMTEK). "There is no feedback, so I cannot react to possible changes in brain activity. It's an open loop." But what if the stimulating system were able to do just that and find the optimal way to stimulate at any moment? This is where the interdisciplinary research project of the Freiburg Neurorobotics Lab, the Bernstein Center Freiburg and IMTEK comes in.
[Figure: Artificial neural networks learn to control biological neuronal networks – see figure legend below]

"This would open up completely new possibilities for stimulating in a much more effective way", says PhD student Jan Wülfing from the Neurorobotics Lab. "The idea is that a controller will be able to receive feedback and interpret the state of the brain in such a way that it can be stimulated appropriately. This could be by adjusting the stimulus timing or changing its intensity”, he explains. "As a new approach, we have developed a closed-loop system, in which the controller could autonomously learn from trial runs how to get it right and we were able to verify that the outcome is really the optimum."

The advantages are obvious: if the system learns to stimulate more efficiently, this could save energy, lengthen the intervals between surgeries to replace the implant's battery, and reduce side effects. The system could also react more specifically to the progression of a disease, to current brain activity, to daily fluctuations in responsiveness, and so on. Of course, there is still a long way to go, but the current research could be a step in this direction.

Learning through positive or negative reinforcement

But how do you teach a technical system to learn – or, to put it mathematically: how do you arrive at a control law that can decide, depending on the state of the brain, whether to carry out this or that action? "Designing a control law by hand is a tedious process and requires detailed knowledge of the system dynamics, which is normally not available," says computer scientist Jan Wülfing. This is why the researchers are pursuing a different method: reinforcement learning, in which the control system is rewarded for actions that achieve the desired effects, while actions that don't are ignored or have negative consequences. "We employ an algorithm that uses an artificial neural network as a component and learns with the help of positive and negative reinforcement. After training, the algorithm 'knows' how the biological neural network reacts to different stimuli under different conditions and picks the best solution for the current situation."
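To make the reward principle concrete, the following deliberately simplified Q-learning toy in Python lets a controller learn which stimulus intensity drives a simulated network toward a target activity level. The toy dynamics, the discretization and all parameters are illustrative assumptions; the researchers' actual algorithm used an artificial neural network as a component rather than a lookup table.

    import random

    STATES = range(10)          # discretized network activity levels
    ACTIONS = [0.0, 0.5, 1.0]   # possible stimulus intensities
    TARGET = 5                  # desired activity level
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    # One value per (state, action) pair: the expected long-term reward.
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def toy_network(state, action):
        """Stand-in for the biological network: stronger stimuli push
        activity up, with some random background fluctuation."""
        drift = random.choice([-1, 0, 1])
        return max(0, min(9, state + int(2 * action) - 1 + drift))

    state = random.choice(list(STATES))
    for step in range(5000):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = toy_network(state, action)
        # Reward: the closer the activity is to the target level, the better.
        reward = -abs(next_state - TARGET)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

After enough trial runs, looking up the best-valued action for the current state yields the stimulus the controller has learned to prefer – and because the updates never have to stop, the same rule lets it readjust when the network's behavior drifts.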

As a test system for their project, the two researchers adapted the reinforcement learning algorithm to biological neural networks in cell cultures. These networks are easily accessible model systems that preserve important aspects of actual in vivo networks – including some of the challenges for neurotechnical treatments, such as unpredictable fluctuations of spontaneous activity in the network.

The two researchers have now succeeded in showing that reinforcement learning methods are indeed able to control certain features of the activity in biological neural networks to a predefined target level, and can autonomously adjust the control when the network changes its activity.

The joint project shows interdisciplinary work at its best and is a positive example of excellent cooperation between experimental, theoretical and applied research – one that presented everyone involved with its own challenges. One challenge, says Jan Wülfing, lay "in trying to apply the findings from a static and easily controlled artificial system to a dynamic, living and unpredictable system that is not nearly as accessible to control." "In the living system, it's just not always clear if what happens is an effect of the change I've made to the algorithm or if the network is just in a different mode right now."

During the collaboration, they also needed to find a common language between two different fields of science. "Biologists and computer scientists understand the term 'state of a system' completely differently," says Sreedhar Kumar, giving one example. Not least, everyone was committing to a completely new project whose feasibility was uncertain.

But the joint effort is already bearing fruit: Sreedhar Kumar brilliantly completed his dissertation on the topic, and at the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) in Bruges, the young scientists were awarded the Best Student Paper prize.

This work was part of a collaboration within and funded by the projects BrainLinks-BrainTools (DFG) and NAMASEN (EU).

Figure legend
Activity features of biological neural networks are controlled by means of electrical stimulation in a closed loop. The control law is learned with reinforcement learning methods.

Conference proceedings
Controlling Biological Neural Networks with Deep Reinforcement Learning

Contact
Dr. Sreedhar Saseendran Kumar
via
Prof. Dr. Ulrich Egert
University of Freiburg
Bernstein Center Freiburg &
Biomicrotechnology
Dept. for Microsystems Engineering, IMTEK
Georges-Koehler-Allee 102
79110 Freiburg
Tel.: +49 761 203-7524
E-mail: egert@imtek.uni-freiburg.de

Jan Wülfing
University of Freiburg
Neurorobotics Lab
Georges-Koehler-Allee 080
79110 Freiburg im Breisgau
E-mail: wuelfj@informatik.uni-freiburg.de
