In this talk, we present a universal description of neural networks using quiver representation theory. The first consequence of this description is the notion of neural teleportation, which we use to accelerate the learning process of neural networks. Neural teleportation applies to any neural network, independently of its architecture, the data, and even the task.
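As a rough illustration of the idea (the abstract itself does not spell out the mechanism), one common way to realize such a teleportation is a positive "change of basis" on the hidden neurons of a ReLU network: each hidden neuron's incoming weights are scaled by a factor tau > 0 and its outgoing weights by 1/tau, which moves the parameters to a new point in weight space while leaving the computed function unchanged. The minimal sketch below assumes a two-layer ReLU MLP in NumPy; the function names (`forward`, `teleport`) and the sampling range for tau are illustrative choices, not details taken from the talk.

```python
# Hedged sketch of a change-of-basis "teleportation" for a 2-layer ReLU MLP.
# The exact rescaling rule and network shape are assumptions for illustration.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    # Plain 2-layer MLP: x -> ReLU(W1 @ x + b1) -> W2 @ h + b2
    return W2 @ relu(W1 @ x + b1) + b2

def teleport(W1, b1, W2, b2, rng):
    # One positive scaling factor tau_i per hidden neuron.
    tau = rng.uniform(0.5, 2.0, size=W1.shape[0])
    # Scale each hidden neuron's incoming weights/bias by tau_i and its
    # outgoing weights by 1/tau_i. Because ReLU is positively homogeneous,
    # ReLU(tau * z) = tau * ReLU(z), so the overall function is unchanged.
    W1_new = tau[:, None] * W1
    b1_new = tau * b1
    W2_new = W2 / tau[None, :]
    return W1_new, b1_new, W2_new, b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)
x = rng.normal(size=8)

y_before = forward(x, W1, b1, W2, b2)
y_after = forward(x, *teleport(W1, b1, W2, b2, rng))
print(np.allclose(y_before, y_after))  # True: same function, different weights
```

The point of the sketch is only that the weights land at a different location on the loss landscape without changing the network's input-output behavior, which is what makes it plausible that such jumps can influence subsequent optimization.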