One of the more exciting developments in AI has been algorithms that can teach themselves the rules of a system. Early game-playing algorithms had to be given the basics of the game up front. Newer versions don't need that—they simply need a system that keeps track of some reward, like a score, and they can figure out which actions maximize that reward without a formal description of the game's rules.
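To see what "learning from reward alone" means in practice, here's a minimal, hypothetical sketch—not from the paper—of an agent that never sees the rules of its environment. It only receives a score after each action, and a simple running average steers it toward whichever action pays off best. The two-action "bandit" setup and all names here are illustrative assumptions.

```python
import random

def run_bandit(steps=2000, epsilon=0.1, seed=0):
    """Learn which of two actions maximizes reward, knowing nothing else.

    The agent never sees the payout table below (the hidden 'rules');
    it only observes the reward signal after each action it takes.
    """
    rng = random.Random(seed)
    payout = {0: 0.3, 1: 0.8}   # hidden rules: action 1 pays off more often
    value = {0: 0.0, 1: 0.0}    # the agent's running estimate of each action
    count = {0: 0, 1: 0}
    for _ in range(steps):
        # Mostly exploit the best-known action; occasionally explore.
        if rng.random() < epsilon:
            action = rng.choice([0, 1])
        else:
            action = max(value, key=value.get)
        # The environment returns only a score, nothing else.
        reward = 1.0 if rng.random() < payout[action] else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return value

values = run_bandit()
best_action = max(values, key=values.get)
```

After a couple of thousand trials, the agent's estimates converge toward the hidden payout rates, and it settles on the higher-paying action—despite never being told how the "game" works.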
A paper published in the journal Neuron takes this a step further by using actual neurons grown in a dish full of electrodes. This adds a further complication, as there was no way to know in advance what neurons would actually find rewarding. The fact that the system seems to have worked may tell us something about how neurons can self-organize their responses to the outside world.
Say hello to DishBrain
The researchers behind the new work, who were primarily based in Melbourne, Australia, call their system DishBrain. And it's based on, yes, a dish with a set of electrodes on its floor. When neurons are grown in the dish, these electrodes can do two things: sense the activity of the neurons above them or stimulate those neurons. The electrodes are large relative to the size of neurons, so both the sensing and stimulation (which can be thought of as similar to reading and writing information) involve a small population of neurons rather than a single one.
Beyond that, it’s a standard culture dish, meaning a variety of cell types can be grown in it—for some control experiments, the researchers used cells that don’t respond to electrical signals. For the main experiments, the researchers tested two types of neurons: some dissected from mouse embryos, and others produced by inducing human stem cells to form neurons. In both cases, as seen in earlier experiments, the neurons spontaneously formed connections with each other, creating networks with spontaneous activity.