Recent advances in deep learning have enabled researchers to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches has yielded solutions for high-dimensional partial differential equations (PDEs), opening the door to a variety of real-world problems, from mathematical finance to stochastic control for industrial applications.
Although feasible, these deep learning methods remain constrained by training time and memory. Tensor Neural Networks (TNNs) tackle these shortcomings, providing significant parameter savings while attaining the same accuracy as classical Dense Neural Networks (DNNs).
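The parameter savings come from replacing a dense weight matrix with a contraction of small tensor cores. The sketch below, in NumPy, is an illustrative two-core factorization (the paper's exact architecture may differ): a 64 x 64 dense map is viewed as an (8*8) x (8*8) operator and factored through a small bond dimension, cutting 4096 free weights down to 512.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: a 64 -> 64 linear map stored as a full matrix needs
# 64 * 64 = 4096 free weights.
n_in, n_out = 64, 64
W_dense = rng.standard_normal((n_in, n_out))

# Tensorized layer (illustrative two-core sketch): split the 64-dim input
# and output into two modes of size 8 each, and join two small cores
# through a bond dimension r.
r = 4
core_a = rng.standard_normal((8, 8, r))   # (input mode 1, output mode 1, bond)
core_b = rng.standard_normal((r, 8, 8))   # (bond, input mode 2, output mode 2)

def tnn_matvec(x):
    """Apply the two-core tensorized layer to a length-64 input vector."""
    x = x.reshape(8, 8)                       # split input into its two modes
    t = np.einsum("ijr,ik->jrk", core_a, x)   # contract input mode 1
    y = np.einsum("jrk,rkl->jl", t, core_b)   # contract bond and input mode 2
    return y.reshape(64)

# Same 64 x 64 linear map, far fewer free parameters:
params_dense = W_dense.size               # 4096
params_tnn = core_a.size + core_b.size    # 2 * 8 * 8 * 4 = 512
```

The contracted cores still define an ordinary 64 x 64 matrix, so the layer slots into a standard network; the savings grow rapidly as layer widths increase, which is what makes the approach attractive for high-dimensional PDE solvers.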
In a paper, a group of researchers, including Christophe Michel of Crédit Agricole and Samuel Mugel of quantum startup Multiverse Computing, show that TNNs can be trained faster than DNNs for the same accuracy. They also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance at an equivalent parameter count to a DNN. They benchmark TNN and TNN Init by solving the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
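The idea behind a tensor-network-style initializer can be sketched as follows; this is a hedged illustration, not the paper's exact TNN Init scheme. Random small cores are contracted into a full dense weight matrix, which then initializes an ordinary DNN layer: the trained network keeps the full DNN parameter count, and only the starting point carries tensor-network structure. The square-mode split and the variance rescaling below are assumptions for the sketch.

```python
import numpy as np

def tn_style_init(n_in, n_out, rank, rng):
    """Illustrative tensor-network-flavoured initializer for a dense layer.

    Draws two small random cores, contracts them into a full (n_in, n_out)
    matrix, and rescales so entries have variance 1/n_in (analogous to
    common fan-in scaling). Assumes n_in and n_out are perfect squares.
    """
    a = int(np.sqrt(n_in))   # size of each input mode
    b = int(np.sqrt(n_out))  # size of each output mode
    core_a = rng.standard_normal((a, b, rank))
    core_b = rng.standard_normal((rank, a, b))
    # Contract the bond index to obtain the full dense weight matrix.
    W = np.einsum("ijr,rkl->ikjl", core_a, core_b).reshape(n_in, n_out)
    # Fan-in style rescaling of the contracted matrix.
    return W / np.sqrt(W.var() * n_in)

rng = np.random.default_rng(0)
W0 = tn_style_init(64, 64, rank=4, rng=rng)  # initial weights for a 64 -> 64 layer
```

A low bond rank constrains the initial matrix to a structured, low-complexity starting point, which is one plausible mechanism for the faster, lower-variance convergence the authors report.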