Tatyana Dobreva

Higher-Dimensional Languages

Human language is limited by a multitude of physical constraints, such as the perturbation of particles in an air medium, the complexity and priors of the listener's decoder, and high-entropy redundancies. The recent advent of deep neural networks (DNNs) allows one to solve challenging problems by feeding a network a large volume of diverse, relevant data.


Given the above, I would find it curious to develop a language operating in a higher dimension. In a way, neural networks already do this by passing information between their layers to satisfy the constraints and goals a human poses to them. But what if two deep neural networks, each with a different internal structure (layers, weights, etc.), had to communicate with each other to achieve a common goal? Each network would receive different information, and the hypothesis is that combining that information would yield an interesting, useful, or surprising decision. Diverse architectures would ideally arrive at a better solution than two identical networks. The networks would also not be limited by human physical constraints, though one could simulate noise in the communication channel. Through this exchange of information, the networks would develop a solution to a given problem.
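As a toy illustration of the setup above, here is a minimal sketch of two separately parameterized networks that must communicate through a narrow, fixed-width message channel and are trained jointly. The "sender"/"receiver" split, the sizes, and the single linear layer per network are my own illustrative assumptions, not a design from this post:

```python
import numpy as np

# Hypothetical sketch: a "sender" network must convey one of 8 concepts to a
# "receiver" network through a 3-dimensional message vector. The message
# vectors that emerge are the candidate "language".

rng = np.random.default_rng(0)
n_concepts, msg_dim = 8, 3            # 8 concepts squeezed through a 3-d channel

W_send = rng.normal(0, 0.3, (n_concepts, msg_dim))   # sender's parameters
W_recv = rng.normal(0, 0.3, (msg_dim, n_concepts))   # receiver's parameters

X = np.eye(n_concepts)                # one-hot inputs the sender must convey
lr = 1.0

def loss_and_probs(W_s, W_r):
    logits = X @ W_s @ W_r
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # mean cross-entropy of the receiver's guess against the true concept
    idx = np.arange(n_concepts)
    return -np.mean(np.log(p[idx, idx])), p

initial_loss, _ = loss_and_probs(W_send, W_recv)

for step in range(3000):
    msg = X @ W_send                  # sender emits a continuous "utterance"
    _, p = loss_and_probs(W_send, W_recv)
    grad_logits = (p - X) / n_concepts
    # backpropagate through both networks at once (joint training)
    grad_send = X.T @ (grad_logits @ W_recv.T)
    W_recv -= lr * msg.T @ grad_logits
    W_send -= lr * grad_send

final_loss, _ = loss_and_probs(W_send, W_recv)
print(initial_loss, final_loss)       # loss drops as a shared code emerges
```

Nothing here forces the message vectors to be interpretable; extracting and analyzing them is exactly the hard part discussed below.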


This raises several challenges:

1. How does one extract the "language" the two networks develop?
2. Would developing such a language need to be posed as an explicit constraint or goal to begin with?
3. How could a human reuse the learned language in future tasks when equipping DNNs to solve problems?
4. How does this scale to N communicating networks?
5. How does one test that the language genuinely benefits from being higher-dimensional, rather than being projectable down to lower dimensions while maintaining the same efficiency?
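For challenge 5, one simple first check is a principal component analysis of recorded messages: if a few components explain nearly all the variance, the "language" could be projected down with little loss. The sketch below simulates a message log whose 16 nominal dimensions secretly carry only 3 dimensions of signal; the message matrix, sizes, and 99% threshold are all illustrative assumptions:

```python
import numpy as np

# Hypothetical dimensionality check: estimate how many dimensions the
# inter-network messages actually use, via SVD on the centered message log.

rng = np.random.default_rng(1)
n_msgs, nominal_dim, true_dim = 500, 16, 3

# Simulated messages living on a 3-d subspace of the 16-d channel, plus noise.
latent = rng.normal(size=(n_msgs, true_dim))
embed = rng.normal(size=(true_dim, nominal_dim))
messages = latent @ embed + 0.01 * rng.normal(size=(n_msgs, nominal_dim))

centered = messages - messages.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)      # variance explained per component

# Effective dimensionality: components needed to explain 99% of the variance.
effective_dim = int(np.searchsorted(np.cumsum(var_ratio), 0.99) + 1)
print(effective_dim)                 # far fewer than the 16 nominal dimensions
```

A low effective dimensionality would be evidence against the language needing its higher-dimensional channel; a full answer would also have to verify that the projected messages preserve task performance, not just variance.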


