Supersymmetric Artificial Neural Network

Thought Curvature, or the "Supersymmetric Artificial Neural Network" hypothesis (accepted to the 2019 String Theory and Cosmology GRC conference[1]), is a Lie superalgebra bound algorithmic learning model, motivated by evidence pertaining to supersymmetry in the biological brain.[2]

It was introduced by Jordan Micah Bennett on May 10, 2016.

"Thought Curvature" or the "Supersymmetric Artificial Neural Network (2016)" is reasonably observable as a new branch or field of Deep Learning in Artificial Intelligence, called Supersymmetric Deep Learning, by Bennett. Supersymmetric Artificial Intelligence (though not Deep Gradient Descent-like machine learning) can be traced back to work by Czachor et al, concerning a single section/four paragraph thought experiment via segment "Supersymmetry and dimensional Reduction" on a so named "Supersymmetric Latent Semantic Analysis (2004)" based thought experiment; i.e. supersymmetry based single value decomposition, absent neural/gradient descent. Most of that paper apparently otherwise focusses on comparisons between non supersymmetric LSA/Single Value Decomposition, traditional Deep Neural Networks and Quantum Information Theory.[3] Biological science/Neuroscience saw application of supersymmetry, as far back as 2007 by Perez et al. (See reference 3 from Bennett's paper [4])

Method

Notation 1 - Manifold Learning: φ(x; θ)ᵀw [5]

Notation 2 - Supermanifold Learning: φ(x; θ, θ̄)ᵀw [4]

Instead of some φ(x; θ)ᵀw neural network representation, as is typical in mean field theory or manifold learning models[6][7][8], the Supersymmetric Artificial Neural Network is parameterized by the supersymmetric directions θ, θ̄.
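The contrast between the two notations can be made concrete with a toy sketch, not drawn from Bennett's paper: a standard layer parameterized only by θ, and a hypothetical "supermanifold" layer that also takes a second parameter block standing in for θ̄. The function names, sizes, and the use of a plain real matrix in place of genuinely anticommuting (Grassmann-odd) coordinates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_layer(x, theta, w):
    # Ordinary representation in the sense of Notation 1: phi(x; theta)^T w,
    # with a tanh feature map standing in for phi.
    phi = np.tanh(x @ theta)
    return phi @ w

def supermanifold_layer(x, theta, theta_bar, w):
    # Toy stand-in for Notation 2: phi(x; theta, theta_bar)^T w.  Here
    # theta_bar is simply a second real parameter block playing the role of
    # the "partner" direction; true anticommuting coordinates are not
    # representable by plain real matrices.
    phi = np.tanh(x @ theta)
    phi_bar = np.tanh(x @ theta_bar)
    return np.concatenate([phi, phi_bar], axis=-1) @ w

x = rng.normal(size=(4, 8))            # batch of 4 inputs with 8 features
theta = rng.normal(size=(8, 16))       # "bosonic" direction
theta_bar = rng.normal(size=(8, 16))   # illustrative "partner" direction
print(manifold_layer(x, theta, rng.normal(size=(16, 3))).shape)         # (4, 3)
print(supermanifold_layer(x, theta, theta_bar,
                          rng.normal(size=(32, 3))).shape)              # (4, 3)
```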

An informal proof of the representation power gained by deeper abstractions of the “Supersymmetric Artificial Neural Network”

Machine learning non-trivially concerns the application of families of functions that guarantee more and more variations in weight space. In other words, machine learning researchers study which functions best transform the weights of an artificial neural network, such that the weights learn to represent values from which the network can produce correct hypotheses or guesses.

The 'Supersymmetric Artificial Neural Network' is yet another way to represent richer values in the weights of the model, because supersymmetric values can capture more information about the input space. For example, supersymmetric systems can capture potential-partner signals, which lie beyond the feature space of the magnitude and phase signals learnt in typical real valued neural nets and deep complex neural networks respectively. As such, a brief historical progression of geometric solution spaces for varying neural network architectures follows:

  1. An optimal weight space produced by shallow or low-dimension artificial neural nets with integer valued or real valued nodes may have good weights that lie, for example, in one simple cone per class/target group.[9][10]
  2. An optimal weight space produced by deep, high-dimension-absorbing real valued artificial neural nets may have good weights that lie in disentangleable manifolds per class/target group convolved by the operator *, instead of the simpler regions per class/target group seen in item (1). (This may guarantee more variation in the weight space than (1), leading to better hypotheses or guesses.)[11]
  3. An optimal weight space produced by shallow but high-dimension-absorbing complex valued artificial neural nets may have good weights that lie in multiple sectors per class/target group, instead of the real regions per class/target group seen in the prior items. (This may guarantee more variation of the weight space than the previous items, by learning additional features in the “phase space”, which also leads to better hypotheses/guesses.)[12]
  4. An optimal weight space produced by deep, high-dimension-absorbing complex valued artificial neural nets may have good weights that lie in chi-distribution-bound Rayleigh space per class/target group convolved by the operator *, instead of the simpler sectors/regions per class/target group seen in the previous items. (This may guarantee more variation of the weight space than the prior items, by learning phase space representations and, by extension, strengthening those representations via convolutional residual blocks; this also leads to better hypotheses/guesses. A sketch of such a complex-valued layer follows this list.)[13]
  5. The 'Supersymmetric Artificial Neural Network', operable on high dimensional data, may reasonably generate good weights that lie in disentangleable supermanifolds per class/target group, instead of the solution geometries seen in the prior items. Supersymmetric values can encode rich partner-potential-delimited features beyond the phase space of (4), in accordance with cognitive biological space[2], since (4) lacks the partner-potential formulation describable in a supersymmetric embedding.[14][15]
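To ground items (3) and (4), a minimal sketch of a complex-valued dense layer follows, in the spirit of complex valued neural nets[12][13], using a modReLU-style activation as in the unitary RNN line of work[27]. The layer sizes, the bias constant, and the function name are illustrative assumptions, not a prescription from those papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_dense(x, W, b, mod_bias=-1.0):
    # One complex-valued dense layer: the weights carry both magnitude and
    # phase, i.e. the extra "phase space" features of items (3) and (4).
    z = x @ W + b
    # modReLU-style activation: threshold the magnitude, keep the phase.
    # mod_bias is an illustrative constant.
    mag, phase = np.abs(z), np.angle(z)
    return np.maximum(mag + mod_bias, 0.0) * np.exp(1j * phase)

x = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
W = rng.normal(size=(8, 5)) + 1j * rng.normal(size=(8, 5))
b = np.zeros(5, dtype=complex)
out = complex_dense(x, W, b)
print(out.shape, out.dtype)   # (4, 5) complex128
```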

Naive Architecture for the “Supersymmetric Artificial Neural Network”

Following is another view of “solution geometry” history, which may clarify the reasoning behind the subsequent naive architecture sequence:

  1. There has been a clear progression of “solution geometries”, ranging from those of the ancient Perceptron[9] to unitary RNNs[16][17][18], complex valued neural nets[19] and Grassmann manifold artificial neural networks[20]. These models may be denoted by φ(x; θ)ᵀw, parameterized by θ, expressible as geometrical groups ranging from orthogonal[21] to special unitary group[22] based: SO(n) to SU(n)..., and they got better at representing input data, i.e. representing richer weights, so the learning models generated better hypotheses or guesses. (A parameterization sketch for the SO(n) and U(n) cases follows this list.)
  2. By “solution geometry” I mean simply the class of regions where an algorithm's weights may lie, when generating those weights to do some task.
  3. As such, if one follows cognitive science, one would know that biological brains may be measured in terms of supersymmetric operations. (“Supersymmetry at brain scale”[2])
  4. These supersymmetric biological brain representations can be represented by supercharge compatible special unitary notation SU(m|n), or φ(x; θ, θ̄)ᵀw, parameterized by θ, θ̄, which are supersymmetric directions, unlike θ seen in item (1). Notably, supersymmetric values can encode or represent more information than the prior classes seen in (1), in terms of “partner potential” signals for example.
  5. So, state of the art machine learning work forming U(n) or SU(n) based solution geometries, although non-supersymmetric, is already in the family of supersymmetric solution geometries that may be observed as occurring in the biological brain or in SU(m|n) supergroup representation.
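The SO(n) and U(n)/SU(n) solution geometries mentioned in items (1) and (5) have concrete, widely used parameterizations via the matrix exponential of skew-symmetric and skew-Hermitian matrices (the idea behind the orthogonal/unitary constraint literature, e.g. reference [26]). No comparably standard SU(m|n) layer is sketched here, since that is precisely what this page proposes. A minimal numpy/scipy sketch, with illustrative function names and sizes:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def so_n_weight(A):
    # exp of a skew-symmetric matrix lies in SO(n): orthogonal, det = +1.
    return expm(A - A.T)

def unitary_weight(B):
    # exp of a skew-Hermitian matrix lies in U(n): unitary.
    return expm(B - B.conj().T)

Q = so_n_weight(rng.normal(size=(4, 4)))
print(np.allclose(Q.T @ Q, np.eye(4)), np.isclose(np.linalg.det(Q), 1.0))

U = unitary_weight(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(np.allclose(U.conj().T @ U, np.eye(4)))
```

Because the exponential map is smooth and surjective onto these connected groups, gradient descent on the unconstrained matrices A and B keeps the resulting weights exactly orthogonal or unitary throughout training.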

The “Edward Witten/String theory powered artificial neural network” is simply an artificial neural network that learns supersymmetric[14] weights.

Looking at the above progression of ‘solution geometries’, going from SO(n)[9] representation to SU(n)[17] representation has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. It is then somewhat natural to look to SU(m|n) representation, i.e. the “Edward Witten/String theory powered artificial neural network” (“Supersymmetric Artificial Neural Network”).

To construct an “Edward Witten/String theory powered artificial neural network”, it may be feasible to compose a system which includes a Grassmann manifold artificial neural network[20], then generate ‘charts’[23] until scenarios occur[14] where the “Edward Witten/String theory powered artificial neural network” is achieved, in the following way:

See points 1 to 5 in this reference[24]

It seems feasible that an M-bound atlas-based learning model, where said M is in the family of supermanifolds from supersymmetry, may be obtained from a system which includes charts (U_I, φ_I) of Grassmann manifold networks G(k, n) and Stiefel manifolds ST(k, n), in n × k matrix terms, where there exists some invertible k × k submatrix Y_I for each Y ∈ ST(k, n), entailing that the quotient map π: ST(k, n) → G(k, n) is a submersion on the Stiefel manifold ST(k, n), thereafter enabling some differentiable Grassmann manifold G(k, n) of dimension k(n − k).[25]
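One concrete ingredient of the construction above is moving a weight block onto a Stiefel manifold and reading off its column span as a point of the corresponding Grassmann manifold. The sketch below uses a standard QR retraction and a projection-matrix representation; it is an illustration of the Grassmann/Stiefel machinery in general, not a prescription from reference [20] or [25].

```python
import numpy as np

rng = np.random.default_rng(3)

def stiefel_retraction(W):
    # Map an arbitrary full-rank n x k matrix onto the Stiefel manifold
    # ST(k, n) (orthonormal columns) via a QR decomposition.
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))   # sign fix makes the retraction unique

def grassmann_projector(Y):
    # A point of the Grassmann manifold G(k, n) -- the subspace spanned by
    # Y's columns -- represented basis-independently by the projector Y Y^T.
    return Y @ Y.T

W = rng.normal(size=(6, 2))                        # n = 6, k = 2 weight block
Y = stiefel_retraction(W)
print(np.allclose(Y.T @ Y, np.eye(2)))             # True: Y lies on ST(2, 6)

P = grassmann_projector(Y)
print(np.allclose(P @ P, P), np.allclose(P, P.T))  # True True: P is a projector
```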



Artificial Neural Network/Symmetry group landscape visualization

1. O(n) structure – Orthogonal; not connected (it has two components), therefore not directly amenable to gradient descent in machine learning. (Paper: see note 2 at the end of page 2 in reference [26]; a numerical check of this connectivity point follows the list.)

2. SO(n) structure – Special Orthogonal; connected and gradient descent compatible while preserving orthogonality, concerning normal space-time. (Paper: see the paper in item 1.)

3. SU(n) structure – Special Unitary; connected and gradient descent compatible; a complex generalization of SO(n), but only a subspace of the larger unitary space, concerning normal space-time. (The Unitary Evolution Recurrent Neural Network[27] relates to the complex unit circle U(1) seen in physics; see page 7 in [28].)

4. U(n) structure – Unitary; connected and gradient descent compatible; a larger unitary landscape than SU(n), concerning normal space-time.[29]

5. SU(m|n) structure – Supersymmetric; connected, and thereafter reasonably gradient descent compatible, with an even larger landscape than U(n), permitting sparticle invariance as a Poincaré group extension (see page 7 in [30]) containing both normal space-time and anti-commuting components, as seen in the Supersymmetric Artificial Neural Network which this page proposes.
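The connectivity distinction between items (1) and (2) can be checked numerically: random orthogonal matrices fall into two disconnected determinant classes, whereas restricting to SO(n) (or using the connected SU(n)/U(n)) keeps a single component that a smooth gradient-descent trajectory can explore. A small numpy sketch, with illustrative sampling details:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_orthogonal(n):
    # Haar-distributed sample from O(n): QR of a Gaussian matrix with the
    # usual sign fix on R's diagonal.
    Q, R = np.linalg.qr(rng.normal(size=(n, n)))
    return Q * np.sign(np.diag(R))

# O(n) splits into two connected components, det = +1 and det = -1; a smooth
# gradient-descent path cannot cross between them, which is the connectivity
# objection in item (1).  SO(n), SU(n) and U(n) are each connected.
dets = np.round([np.linalg.det(haar_orthogonal(3)) for _ in range(2000)])
print(sorted(set(dets.tolist())))   # both -1.0 and 1.0 appear
```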


Ending Remarks

Pertinently, the “Edward Witten/String theory powered supersymmetric artificial neural network” is one wherein supersymmetric weights are sought. Many machine learning algorithms have not been empirically shown to be exactly biologically plausible; for example, deep neural network algorithms have not been observed to occur in the brain, yet such algorithms work in practice in machine learning.

Likewise, regardless of supersymmetry's elusiveness at the LHC, it may be quite feasible, as seen above, to borrow formal methods from physics even where the related physical phenomena have yet to be observed; thus it may be pertinent to try to construct a model that learns supersymmetric weights, as I proposed throughout this paper, following the progression of solution geometries going from SO(n) to SU(n) and onwards to SU(m|n).[31]

References

  1. "GRC Frontiers of Science".
  2. Pérez et al. “Supersymmetric methods in the traveling variable: inside neurons and at the brain scale”.
  3. Aerts, Diederik; Czachor, Marek (2004-02-19). Quantum Aspects of Semantic Analysis and Symbolic Artificial Intelligence. 2004. 
  4. Bennett, Jordan (2016-05-23). Thought Curvature. 2016. https://www.researchgate.net/publication/316586028_Thought_Curvature_An_underivative_hypothesis. 
  5. Deep Learning Book; Bengio et al
  6. "Neural Networks, Manifolds, and Topology".
  7. Higgins, Irina; Matthey, Loic; Glorot, Xavier; Pal, Arka; Uria, Benigno; Blundell, Charles; Mohamed, Shakir; Lerchner, Alexander (2016-06-17). Early Visual Concept Learning with Unsupervised Deep Learning. 2016. 
  8. Poole, Ben; Lahiri, Subhaneil; Raghu, Maithra; Sohl-Dickstein, Jascha; Ganguli, Surya (2016-06-16). Exponential expressivity in deep neural networks through transient chaos. 2016. 
  9. Perceptron
  10. Artificial neural network § Hebbian learning
  11. Deep learning § Deep neural networks
  12. "Complex Valued Neural Networks - Experiments".
  13. Trabelsi, Chiheb; Bilaniuk, Olexa; Serdyuk, Dmitriy; Subramanian, Sandeep; Santos, João Felipe; Mehri, Soroush; Rostamzadeh, Negar; Bengio, Yoshua et al. (2017-05-27). Deep Complex Networks. 2017. 
  14. Supersymmetry
  15. Thought Curvature
  16. Fiori, Simone. “A study on neural learning on manifold foliations: the case of the Lie group SU(3)”.
  17. Wisdom, Scott; Powers, Thomas; Hershey, John R; Le Roux, Jonathan; Atlas, Les (2016-10-31). Full-Capacity Unitary Recurrent Neural Networks. 2016. 
  18. Jing, Li; Shen, Yichen; Dubcek, Tena; Peurifoy, John; Skirlo, Scott; LeCun, Yann; Tegmark, Max; Soljačić, Marin (2017-04-03). Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs. 2017. 
  19. Trabelsi, Chiheb; Bilaniuk, Olexa; Zhang, Ying; Serdyuk, Dmitriy; Subramanian, Sandeep; Santos, João; Mehri, Soroush; Rostamzadeh, Negar et al. (2017-11-17). Deep Complex Networks. 2017. 
  20. Huang, Zhiwu; Wu, Jiqing; Van Gool, Luc (2017-11-17). Building Deep Networks on Grassmann Manifolds. 2017. 
  21. Orthogonal group
  22. Special unitary group
  23. "The Grassmann Manifold" (PDF).
  24. "Math Wisc Edu/Grassmmann" (PDF).
  25. "The Grassmann Manifold" (PDF).
  26. Lezcano-Casado, Mario; Martínez-Rubio, David (2019-01-24). Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group. 2019. 
  27. Arjovsky, Martin; Shah, Amar; Bengio, Yoshua (2018-11-20). Unitary Evolution Recurrent Neural Networks. 2018. 
  28. "Physics Wisc Edu/Symmetry Group Supermanifold, ncatlab" (PDF).
  29. Jing, Li; Shen, Yichen; Dubcek, Tena; Peurifoy, John; Skirlo, Scott; LeCun, Yann; Tegmark, Max; Soljačić, Marin (2017-04-03). Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs. 2017. 
  30. "Physics Wisc Edu/Symmetry Group Supermanifold, ncatlab" (PDF).
  31. Supergroup (physics)