I've always wondered how people expected to control an artificial intelligence that becomes smarter than us. This kind of work seems to offer a credible path: by amplifying or suppressing the relevant internal features, its controllers could attempt to make the AI more subservient or loyal.
Of course, the danger is that something subtle gets missed, and we could be only one mistake away from catastrophe.