Schedule of Workshop 4: Tensor Approximation in High Dimension

Friday, August 5

09:00-12:00 Everyone is invited for discussion

Evrim Acar Ataman: All-at-once Optimization for Mining Higher-order Tensors

In many disciplines, we deal with data that has more than two axes of variation. For instance, multichannel EEG (electroencephalogram) signals have spatial, spectral and temporal dimensions. We represent such data as higher-order tensors and use tensor factorizations, i.e., higher-order analogues of matrix factorizations, to extract the underlying patterns in the data. However, information extraction is often challenging due to large-scale and incomplete data sets, i.e., data with missing or unknown values.
Besides, we are commonly faced with data from multiple sources and need to fuse data sets to capture the underlying structures. In order to address these issues, we introduce a gradient-based optimization framework for factorizing higher-order tensors that easily extends to the analysis of incomplete tensors as well as the joint analysis of multiple data sets. In particular, we are interested in fitting the tensor model called CANDECOMP/PARAFAC (CP), which expresses a tensor as a sum of rank-one tensors and has been widely used in various disciplines including chemometrics, signal processing, computational neuroscience and social network analysis. The traditional CP algorithm is an alternating least squares algorithm, which is computationally very efficient but not robust in the case of overfactoring, i.e., extracting more factors than are truly present in the data. Alternative algorithms shown to be robust to overfactoring do not scale to large data sets. The proposed gradient-based approach uses all-at-once optimization and solves for all CP factor matrices simultaneously. Using numerical experiments, we demonstrate that it is both accurate and scalable. We also show how the proposed approach extends to fitting a CP model to incomplete data and to the joint analysis of multiple data sets. We further demonstrate the usefulness of the proposed framework on several biomedical applications.
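
As a concrete illustration of the all-at-once idea, here is a minimal sketch for a third-order CP model: all three factor matrices are stacked into one variable and fitted by a single gradient-based solver (SciPy's L-BFGS-B here, standing in for whatever first-order method the authors actually use):

    import numpy as np
    from scipy.optimize import minimize

    R, dims = 3, (10, 12, 14)

    def unpack(x):
        # Split the stacked variable into the three CP factor matrices
        off = np.cumsum([0] + [n * R for n in dims])
        return [x[off[k]:off[k + 1]].reshape(dims[k], R) for k in range(3)]

    def f_and_grad(x, T):
        # Objective 0.5 * ||T - [[A, B, C]]||_F^2 and its gradient with
        # respect to all factors at once (no alternating sweeps)
        A, B, C = unpack(x)
        E = np.einsum('ir,jr,kr->ijk', A, B, C) - T        # residual tensor
        gA = np.einsum('ijk,jr,kr->ir', E, B, C)
        gB = np.einsum('ijk,ir,kr->jr', E, A, C)
        gC = np.einsum('ijk,ir,jr->kr', E, A, B)
        return 0.5 * np.sum(E ** 2), np.concatenate([g.ravel() for g in (gA, gB, gC)])

    rng = np.random.default_rng(0)
    x_true = rng.standard_normal(sum(n * R for n in dims))
    T = np.einsum('ir,jr,kr->ijk', *unpack(x_true))        # exact rank-3 target
    res = minimize(f_and_grad, rng.standard_normal(x_true.size),
                   args=(T,), jac=True, method='L-BFGS-B')
    print(res.fun)   # near zero at a good fit (CP fitting can have local minima)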

Andre Uschmajew: A local convergence theory of the ALS algorithm for tensor approximation

Much analysis of the ALS algorithm for tensor approximation focuses on global questions such as swamps, cycles, and the convergence of subsequences to critical points. But any widely used algorithm should desirably also come with a local convergence theory. In this talk we present a local convergence theorem for PARAFAC-ALS under reasonable (and perhaps unavoidable) assumptions. This result is a particular instance of a general framework for alternating optimization with scaling redundancies. We apply the same theory to obtain a local convergence theorem for TT approximation by ALS.
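
For reference, a bare-bones sketch of the PARAFAC-ALS iteration this theory concerns, for a third-order tensor (no normalization or stopping rules, which a practical code would add):

    import numpy as np

    def khatri_rao(B, C):
        # Column-wise Kronecker product: result[(j,k), r] = B[j,r] * C[k,r]
        return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

    def als_sweep(T, A, B, C):
        # One ALS sweep: update each factor by a linear least-squares solve
        # while the other two are held fixed (each subproblem is convex)
        I, J, K = T.shape
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, J * K).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T.transpose(1, 0, 2).reshape(J, I * K).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T.transpose(2, 0, 1).reshape(K, I * J).T, rcond=None)[0].T
        return A, B, C

    rng = np.random.default_rng(1)
    T = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((n, 2)) for n in (6, 7, 8)))
    A, B, C = (rng.standard_normal((n, 2)) for n in (6, 7, 8))
    for _ in range(50):
        A, B, C = als_sweep(T, A, B, C)
    # residual typically tiny for this easy exact-rank case
    print(np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T))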

Hans De Sterck: Extending GMRES to Nonlinear Optimization: Application to Tensor Approximation

A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear generalized minimal residual (GMRES) approach, recombining previous iterates in an optimal way, and essentially using the stand-alone one-step process as a preconditioner. In the third step, a line search is performed for globalization. The resulting nonlinear GMRES (N-GMRES) optimization algorithm is applied to dense and sparse tensor decomposition test problems. The numerical tests show that ALS accelerated by N-GMRES may significantly outperform both stand-alone ALS and a standard nonlinear conjugate gradient optimization method, especially when highly accurate stationary points are desired for difficult problems. The proposed N-GMRES optimization algorithm is based on general concepts and may be applied to other nonlinear optimization problems. We briefly discuss how updates in the steepest descent direction can be used as a natural preconditioner to make the N-GMRES optimization algorithm applicable to a broad range of smooth nonlinear optimization problems.
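
A minimal sketch of the recombination step alone (step two of the three), under the assumption that the residual being minimized is the gradient of the objective; the toy driver below uses plain gradient descent instead of ALS as the one-step process, and the line-search globalization is omitted:

    import numpy as np

    def ngmres_recombine(x_bar, g_bar, X_prev, G_prev):
        # Accelerated iterate: x = x_bar + sum_i beta_i (x_bar - x_i), with beta
        # chosen to minimize the linearized gradient
        # || g_bar + sum_i beta_i (g_bar - g_i) ||_2
        D = np.stack([g_bar - g for g in G_prev], axis=1)
        beta, *_ = np.linalg.lstsq(D, -g_bar, rcond=None)
        return x_bar + sum(b * (x_bar - xi) for b, xi in zip(beta, X_prev))

    # Toy driver on a convex quadratic, with gradient descent standing in for
    # the ALS preconditioner; the full method would add a line search here.
    rng = np.random.default_rng(0)
    Q = np.diag(np.linspace(1.0, 100.0, 20))
    b = rng.standard_normal(20)
    grad = lambda x: Q @ x - b
    x, X_prev, G_prev = np.zeros(20), [], []
    for _ in range(30):
        X_prev.append(x.copy()); G_prev.append(grad(x))
        x_bar = x - 0.01 * grad(x)                    # tentative one-step iterate
        x = ngmres_recombine(x_bar, grad(x_bar), X_prev[-5:], G_prev[-5:])
    print(np.linalg.norm(grad(x)))                    # far below plain descent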

Berkant Savas: Low rank tensor approximation: Krylov-type methods and perturbation theory

In this talk we will consider the low multilinear rank approximation of a given third order tensor. In the first part of the talk we will present a few generalizations of matrix Krylov methods to tensors. The objective is to obtain a rank-(p, q, r) approximation of a given l x m x n tensor A. The problem can be viewed as finding low dimensional signal subspaces associated to the different modes of the tensor. Krylov methods, similar to the matrix case, are particularly well suited for problems involving large and sparse tensors, or for tensors that allow efficient multilinear tensor-times-vector multiplications. For a few special cases we will prove that our methods capture the true signal subspaces associated to the tensor within a certain number of steps of the algorithm. We will present experimental results on real world data that confirm the usefulness of the proposed methods for the given objective. In the second part of the talk we will discuss the sensitivity of a low rank tensor approximation to perturbations. As the low rank tensor approximation problem is highly non-linear, we will consider solutions at local optima of the objective function. We will figuratively describe the first and second order conditions of optimality and compare them with the matrix case. Further, for the best rank-k matrix approximation, given by the truncated singular value decomposition (SVD), it is well known that the sensitivity of the left and right singular vectors is bounded in terms of the gap between the k'th and (k+1)'th singular values. We will see what this gap generalizes to in the case of low rank tensor approximations. We will use specific examples to illustrate the concepts of the presentation.
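
A minimal sketch of a Krylov-type recursion of this flavor, for a rank-(p, q, r) approximation of an l x m x n tensor; the update order and starting vectors are illustrative choices, not the authors' exact schemes:

    import numpy as np

    def orthonormal_extend(Q, v):
        # One Gram-Schmidt step: orthogonalize v against Q's columns and append
        v = v - Q @ (Q.T @ v)
        nrm = np.linalg.norm(v)
        return Q if nrm < 1e-12 else np.hstack([Q, (v / nrm)[:, None]])

    def tensor_krylov(A, p, q, r, rng):
        # Each new basis vector is a tensor-times-vector contraction of A
        # against the newest vectors in the other two modes (generic starting
        # vectors; breakdown handling is omitted)
        l, m, n = A.shape
        v, w = rng.standard_normal(m), rng.standard_normal(n)
        U, V, W = np.zeros((l, 0)), np.zeros((m, 0)), np.zeros((n, 0))
        for _ in range(max(p, q, r) + 1):
            u = np.einsum('lmn,m,n->l', A, v, w)
            if U.shape[1] < p: U = orthonormal_extend(U, u)
            v = np.einsum('lmn,l,n->m', A, U[:, -1], w)
            if V.shape[1] < q: V = orthonormal_extend(V, v)
            w = np.einsum('lmn,l,m->n', A, U[:, -1], V[:, -1])
            if W.shape[1] < r: W = orthonormal_extend(W, w)
            v, w = V[:, -1], W[:, -1]
        return U, V, W

    # On a tensor of exact multilinear rank (3, 3, 3), the recursion recovers
    # the signal subspaces after a few steps:
    rng = np.random.default_rng(0)
    A = np.einsum('pqr,lp,mq,nr->lmn', rng.standard_normal((3, 3, 3)),
                  rng.standard_normal((30, 3)), rng.standard_normal((40, 3)),
                  rng.standard_normal((50, 3)))
    U, V, W = tensor_krylov(A, 3, 3, 3, rng)
    core = np.einsum('lmn,lp,mq,nr->pqr', A, U, V, W)
    print(np.linalg.norm(np.einsum('pqr,lp,mq,nr->lmn', core, U, V, W) - A))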

Ognyan Kounchev: Tensor Decompositions and New Models in Spectral Theory

Spectral theory studies the unitary invariants of operators. However, it is notorious that a satisfactory analysis, including the inverse problems of spectral (and scattering) theory, is available only in the case of one-dimensional operators. In particular, in the case of discrete one-dimensional operators, which are the matrices, the spectral invariants are the eigenvalues, and every approximation (including rank approximations) is intimately related to the spectrum. Hence, a reasonable attempt to generalize the approximations to multidimensional operators has to be related to a generalization of the notion of spectrum. We provide some new tensor decompositions and approximations which are related to new "pseudo-positive models" in the spectral theory of multidimensional operators, cf. [1]. For these models we show that one may define in a natural way the notions of "Determinant" and "Rank", which differ from other approaches, cf. e.g. [2]. The main issue is the notion of a (multidimensional) Distributed Spectrum for the operators arising through such models. Based on this notion of Distributed Spectrum, one may introduce a new approach to the rank-type approximation of operators, in particular to the discrete versions which are called tensors. In the one-dimensional case one has an approximation which corresponds to the Gauss-Jacobi measure related to the orthogonal polynomials generated by the spectral measure. Respectively, for our "pseudo-positive models" we are able to generalize the usual rank approximation, analogous to the Eckart-Young one, as well as an analog of the Gauss-Jacobi approximation. Alternatively, the above models provide a generalization to the multidimensional case of the Jacobi matrix representation known for matrices (cf. [3], [4]), and hence a new approximation to the multi-way matrices (tensors).

[1] O. Kounchev, H. Render. The moment problem for pseudo-positive definite functionals. Arkiv för Matematik, 48 (2010), pp. 97-120.
[2] I. Gelfand, M. Kapranov, A. Zelevinsky. Discriminants, Resultants, and Multidimensional Determinants. Birkhäuser, 1994; 2008.
[3] N. Achiezer. The Classical Moment Problem. Holden Day, 1966 (translated from Russian).
[4] B. Simon. Szegő's Theorem and Its Descendants: Spectral Theory for L^2 Perturbations of Orthogonal Polynomials. Princeton University Press, 2010.

Lars Grasedyck: Hierarchical Tensor Methods for PDEs with Stochastic Parameters

We consider the problem of solving a (stochastic) parameter-dependent equation A(ω)u(ω) = b(ω), ω ∈ Ω, for systems A governed by partial differential operators that depend on ω. Our aim is to calculate quantities of interest (mean, variance, maximum, etc.) of the set of solutions. One way to solve such a problem is by expanding the system, the right-hand side, as well as the solution in independent uncorrelated stochastic variables ω_1, ..., ω_p, and then solving the arising large-scale deterministic problem A(ω_1, ..., ω_p)u(ω_1, ..., ω_p) = b(ω_1, ..., ω_p). An alternative approach is to use (quasi or multilevel) Monte Carlo (MC) methods, which require just simple sampling (M simulations), but these are useful only for certain quantities of interest (e.g. the mean). We will present a new approach based on hierarchical Tucker (HT) representations of tensors. This method is based on standard PDE solvers for deterministic systems. The set of solutions is approximated in a low rank (HT) tensor format that allows for many parameters (thousands), since for fixed rank the complexity depends only linearly or quadratically on the number of parameters.
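
A minimal sketch of the underlying low-rank effect with one parameter, using a truncated SVD as a stand-in for the HT format (which plays the same role when there are many parameters):

    import numpy as np

    # 1D diffusion-reaction model problem with one stochastic parameter w:
    # A(w) = L + w * M, sampled on a grid of parameter values. The solution
    # set, stored column-wise, has rapidly decaying singular values.
    n, m = 100, 50
    h = 1.0 / (n + 1)
    L = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2           # discrete Laplacian
    M = np.diag(np.linspace(1.0, 2.0, n))                  # reaction coefficient
    b = np.ones(n)
    samples = np.linspace(0.0, 1.0, m)
    U = np.column_stack([np.linalg.solve(L + w * M, b) for w in samples])
    s = np.linalg.svd(U, compute_uv=False)
    print(int(np.sum(s > 1e-10 * s[0])))   # numerical rank of the solution set: small
    print(U.mean(axis=1)[:3])              # a quantity of interest: the mean solution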

Natalia Dück, Gerwald Lichtenberg: Towards state space identification for multilinear systems by hierarchical tensor decomposition

Well-known state space identification methods efficiently find parameters of linear systems from measurement data. If the investigated system is not linear but multilinear, the method is no longer directly applicable, and tensor decomposition is a key to parameter identification for this class of systems. Moreover, as the dynamics of a discrete-time system are recursive, hierarchical decomposition is appropriate for this task. The identification problem will be presented, as well as approaches to extend SVD methods for linear systems to multilinear systems based on hierarchical tensor decomposition algorithms.

Stefan Hoffmann, Gerwald Lichtenberg: An approach to hierarchical algebraic decomposition of Boolean tensors

Decompositions of tensor representations of Boolean functions can be of canonical polyadic (CP) form, which leads to formats known as ternary vector lists. If the condition that all elements of the factor matrices and weighting vectors be Boolean is relaxed and real values are allowed, it is possible to reconstruct the Boolean values of the original tensor by quantization. A new approach to hierarchical algebraic decomposition of Boolean tensors will be presented. It uses hierarchical Tucker decomposition techniques based on low-rank approximated SVD over the real numbers. The main design parameters of the algorithm are the quantization thresholds and the desired rank of the decomposition. Results of this new decomposition method are discussed together with some examples. Additionally, a comparison to a purely binary method that uses Boolean matrix decomposition (BMD) instead of the SVD is given.
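
A minimal sketch of the relax-and-quantize idea on a single Boolean matrix slice (the talk's method works on full tensors in the hierarchical Tucker format; the rank k and the threshold 0.5 below are illustrative design parameters):

    import numpy as np

    rng = np.random.default_rng(0)
    u1, u2 = rng.integers(0, 2, (2, 20))
    v1, v2 = rng.integers(0, 2, (2, 30))
    # Boolean OR of two rank-one patterns; its real-valued rank is at most 3
    B = np.minimum(np.outer(u1, v1) + np.outer(u2, v2), 1).astype(float)

    U, s, Vt = np.linalg.svd(B)
    k = 3                                   # desired rank of the real relaxation
    B_real = U[:, :k] * s[:k] @ Vt[:k]      # real-valued low-rank approximation
    B_hat = (B_real > 0.5).astype(float)    # quantization threshold (design parameter)
    print(np.abs(B_hat - B).sum())          # 0: the Boolean entries are recovered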

Rob Stevenson: The adaptive piecewise tensor product wavelet scheme for solving operator equations

The adaptive wavelet method for solving well-posed operator equations converges with the rate of best N-term approximation. When tensor product bases are applied this rate is independent of the dimension of the underlying domain. We discuss the application of piecewise tensor product bases to generalize these results to non-product domains.

Christine Tobler: Preconditioned low-rank methods for high-dimensional elliptic PDE-eigenvalue problems

We consider elliptic PDE-eigenvalue problems on a tensorized domain, discretized such that the resulting matrix eigenvalue problem Ax = λx exhibits Kronecker product structure. In particular, we are concerned with the case of high dimensions, where standard approaches to the solution of eigenvalue problems fail due to the exponentially growing number of degrees of freedom. Recent work shows that this curse of dimensionality can be addressed in many cases by approximating the desired solution vector x in a low-rank tensor format. We use the hierarchical Tucker decomposition to develop a low-rank variant of LOPCG, a classical preconditioned eigenvalue solver. Alternatively, the ALS and MALS (DMRG) methods known from computational quantum physics could be applied in this setting. The ALS and MALS methods require the solution of reduced eigenvalue problems involving 2-4 dimensional tensors. The standard LOPCG method was recently proposed for the solution of these problems; we propose its low-rank variant for the case of the MALS method. Additionally, given a preconditioner for the original system, a preconditioner can be found for the reduced eigenvalue problem. We present a number of numerical experiments, and briefly discuss different choices of stopping criteria and truncation ranks for the reduced eigenvalue problems.
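
To see why low-rank formats mesh with Kronecker structure, consider the two-dimensional case (the hierarchical Tucker format generalizes this to many dimensions): applying a Kronecker-sum operator to a factored vector preserves the factored form, at the price of a rank increase that a recompression step removes. A minimal sketch:

    import numpy as np

    def apply_kron_sum(A1, A2, U, V):
        # For x = vec(X) (column-major) with X = U V^T:
        # (A1 (x) I + I (x) A2) x = vec(A2 X + X A1^T) = [A2 U, U][V, A1 V]^T,
        # so the result stays factored; the rank doubles.
        return np.hstack([A2 @ U, U]), np.hstack([V, A1 @ V])

    def truncate(U, V, k):
        # Recompress the factored representation to rank k via QR + small SVD
        Qu, Ru = np.linalg.qr(U)
        Qv, Rv = np.linalg.qr(V)
        P, s, Qt = np.linalg.svd(Ru @ Rv.T)
        return Qu @ P[:, :k] * s[:k], Qv @ Qt[:k].T

    rng = np.random.default_rng(0)
    n, k = 40, 3
    A1 = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    A2 = A1
    U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
    Uy, Vy = apply_kron_sum(A1, A2, U, V)
    Ut, Vt = truncate(Uy, Vy, 2 * k)        # exact here, since the rank is <= 2k
    full = (np.kron(A1, np.eye(n)) + np.kron(np.eye(n), A2)) @ (U @ V.T).flatten('F')
    print(np.linalg.norm(Ut @ Vt.T - full.reshape(n, n, order='F')))   # ~1e-12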

Reinhold Schneider: Optimization in hierarchical and TT tensor formats

Novel hierarchical tensor formats offer stable approximation at low-order cost. We consider the solution of quadratic optimization problems constrained by the restriction to tensors of prescribed ranks r. We analyse the open manifold of such tensors and its projection onto the tangent space. We further derive differential equations for the gradient flow and stationary equations based on the Dirac-Frenkel variational principle. We derive projected gradient methods, and a preconditioned version which generalizes the self-consistent iteration scheme, and focus on ALS, the alternating least squares or alternating linear scheme. We examine the convergence of these algorithms and their stability properties. For the sake of simplicity, we often confine ourselves to the TT format and its matrix product representation. Nevertheless, most of the material generalizes to hierarchical tensor formats, resp. tree tensor networks. The talk contains material from ongoing collaborative research with the group at MPI Leipzig.
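
Since the talk confines itself largely to the TT format, a minimal sketch of its successive-SVD construction may help fix ideas (an illustration of the format only, not of the optimization algorithms discussed):

    import numpy as np

    def tt_svd(T, eps=1e-12):
        # Build a tensor train by successive truncated SVDs of unfoldings
        d, dims = T.ndim, T.shape
        cores, r = [], 1
        M = T.reshape(dims[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(M.reshape(r * dims[k], -1), full_matrices=False)
            rk = max(1, int(np.sum(s > eps * s[0])))
            cores.append(U[:, :rk].reshape(r, dims[k], rk))   # TT core k
            M = s[:rk, None] * Vt[:rk]                        # carry remainder right
            r = rk
        cores.append(M.reshape(r, dims[-1], 1))               # last core
        return cores

    def tt_full(cores):
        # Contract the train back to a full tensor (for verification only)
        out = cores[0]
        for G in cores[1:]:
            out = np.tensordot(out, G, axes=([-1], [0]))
        return out[0, ..., 0]

    # sin(i+j+k+l) has exact TT ranks 2, so the reconstruction is exact:
    T = np.fromfunction(lambda i, j, k, l: np.sin(i + j + k + l), (4, 4, 4, 4))
    cores = tt_svd(T)
    print([G.shape for G in cores], np.linalg.norm(tt_full(cores) - T))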

Lek-Heng Lim: Multilinear Algebraic Analogues of Linear Algebraic Notions

We will discuss multilinear algebraic versions of some well-known notions in linear algebra. Our list will include: multilinear systems of equations, hyperdeterminants, multilinear programming, multilinear least squares, tensor eigenvalue and symmetric tensor eigenvalue problems, tensor singular value problems, tensor ranks, symmetric positive definite tensors and Cholesky decomposable symmetric tensors, and spectral/nuclear/Schatten/Ky Fan norms of tensors.
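
Of the listed notions, the symmetric tensor eigenvalue problem T x x = λ x is easy to make concrete. A minimal sketch using plain higher-order power iteration; note that, unlike shifted variants, this bare iteration is not guaranteed to converge for every symmetric tensor, so the example uses a tensor with a dominant rank-one part:

    import numpy as np

    def symmetrize3(T):
        # Average over all six permutations of the three indices
        perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
        return sum(np.transpose(T, p) for p in perms) / 6.0

    def tensor_eigenpair(T, iters=500, seed=0):
        # Higher-order power iteration for T x x = lambda x, ||x|| = 1.
        # A shift would guarantee convergence in general; omitted for brevity.
        x = np.random.default_rng(seed).standard_normal(T.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(iters):
            y = np.einsum('ijk,j,k->i', T, x, x)
            x = y / np.linalg.norm(y)
        return np.einsum('ijk,i,j,k->', T, x, x, x), x

    rng = np.random.default_rng(1)
    v = rng.standard_normal(5); v /= np.linalg.norm(v)
    T = 3.0 * np.einsum('i,j,k->ijk', v, v, v) \
        + 0.1 * symmetrize3(rng.standard_normal((5, 5, 5)))
    lam, x = tensor_eigenpair(T)
    print(lam, np.linalg.norm(np.einsum('ijk,j,k->i', T, x, x) - lam * x))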

Ivan Oseledets: Multiparametric model reduction using TT-format

One of the most promising applications of the novel tensor formats is the solution of multiparametric problems and multiparametric model reduction. The solution is represented as one large tensor and is sought directly in this format. The idea for lowering the complexity is to perform ALS iterations between the physical and parametric spaces and to treat them separately. In this talk, I will show how recent results can be applied to tackle this problem. Possible variants include:
1) Cross interpolation, especially the TT-RC (TT-Renormalization-Cross) method (Oseledets, Savostyanov)
2) Solution of linear systems in the TT-format using the DMRG approach (Oseledets, Dolgov)
Numerical experiments will be provided.

Dmitry Savostyanov: FFT and cross interpolation schemes for QTT format

The tensor train (TT) format has recently been proposed in [6] as a new method of data-sparse representation of high-dimensional data (d-tensors, i.e. arrays with d indices) based on separation of variables. The TT format has the advantages of both the canonical and Tucker formats: the number of parameters does not grow exponentially with dimension, and the decomposition/approximation problem is stable and can be computed by methods based on the SVD. Using the quantization idea proposed in [5, 4], we can apply the TT format to data of low dimension, resulting in the QTT format. Recent progress in the development of efficient algorithms for QTT data includes the FFT algorithm and a cross interpolation scheme. The m-dimensional Fourier transform of an n x ... x n vector with n = 2^d has logarithmic complexity, O(m^2 d^2 r^3), where r is the maximum QTT rank of the input, output and all intermediate vectors during the procedure. For vectors with moderate r and large n and m, the proposed algorithm outperforms the O(n^m log n) FFT algorithm of almost-volume complexity. In this talk we describe the class of vectors with r = 1. Also, we show how we can combine the cross interpolation algorithm with the FFT to compute Fourier images of some one-, two- and three-dimensional functions efficiently, as well as convolutions. The proposed cross interpolation algorithm is based on the DMRG scheme and uses the maximum-volume principle [2, 3] to select the interpolation positions. It can also be formulated using the Wedderburn rank reduction framework, applied to 3-tensors in [1]. This is joint work with Boris Khoromskij, Sergey Dolgov and Ivan Oseledets, supported by RFBR/DFG grant 09-01-91332 and RFBR grants 09-01-00565 and 10-01-00757.
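
A minimal sketch of the quantization idea and of the r = 1 class: a pure exponential sampled on 2^d points factorizes over the binary digits of the index, so every unfolding of its quantized tensor has rank 1:

    import numpy as np

    # A vector of length n = 2^d is reshaped into a d-dimensional
    # 2 x 2 x ... x 2 tensor; its QTT ranks are the ranks of the
    # sequential unfoldings, computed here by dense SVDs for illustration.
    d = 16
    t = np.arange(2 ** d) / 2 ** d
    x = np.exp(2j * np.pi * 7.0 * t)        # a single harmonic
    ranks = []
    for k in range(1, d):
        s = np.linalg.svd(x.reshape(2 ** k, -1), compute_uv=False)
        ranks.append(int(np.sum(s > 1e-10 * s[0])))
    print(ranks)   # all ones: exponentials belong to the r = 1 class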

[1] S. A. Goreinov, I. V. Oseledets, and D. V. Savostyanov. Wedderburn rank reduction and Krylov subspace method for tensor approximation. Part 1: Tucker case. Preprint 2010-01, INM RAS, Moscow, 2010.
[2] S. A. Goreinov and E. E. Tyrtyshnikov. The maximal-volume concept in approximation by low-rank matrices. Contemporary Mathematics, 208:47-51, 2001.
[3] S. A. Goreinov, I. V. Oseledets, D. V. Savostyanov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. How to find a good submatrix. In V. Olshevsky and E. Tyrtyshnikov, editors, Matrix Methods: Theory, Algorithms, Applications, pages 247-256. World Scientific Publishing, 2010.
[4] B. N. Khoromskij and I. V. Oseledets. Quantics-TT approximation of elliptic solution operators in high dimensions. J. Numer. Math., 2011.
[5] I. V. Oseledets. Approximation of 2^d x 2^d matrices using tensor decomposition. SIAM J. Matrix Anal. Appl., 31(4):2130-2145, 2010.
[6] I. V. Oseledets and E. E. Tyrtyshnikov. Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comp., 31(5):3744-3759, 2009.
[7] I. V. Oseledets and E. E. Tyrtyshnikov. TT-cross algorithm for the approximation of multidimensional arrays. Linear Algebra Appl., 432(1):70-88, 2010.

Thomas Huckle: Computations in Quantum Tensor Networks

We consider tensor representations of vectors. The vector x we are looking for is the eigenvector corresponding to the smallest eigenvalue of a Hamiltonian matrix which describes a spin system with N particles, for large N. Hence, the length of the vector is 2^N, and the vector cannot be stored explicitly. Therefore, the vector is approximated via the Matrix Product State (MPS) ansatz (similar to a Tensor Train representation). We discuss properties of MPS vectors and operators, e.g. normalization, symmetries, Matrix Product Operators, and related eigenvalue problems and computations.
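
A minimal sketch of the MPS ansatz itself, with random cores and an assumed uniform bond dimension (for illustration only; real computations keep everything in core form and never build the full 2^N vector):

    import numpy as np

    def mps_to_vector(cores):
        # Contract a matrix product state into the full 2^N vector
        # (feasible only for small N; the point of MPS is to avoid this)
        v = cores[0]                                   # shape (1, 2, r_1)
        for G in cores[1:]:
            v = np.tensordot(v, G, axes=([-1], [0]))   # contract the bond index
        return v.reshape(-1)

    rng = np.random.default_rng(0)
    N, r = 10, 4                                       # 10 spins, bond dimension 4
    bond = [1] + [r] * (N - 1) + [1]
    cores = [rng.standard_normal((bond[k], 2, bond[k + 1])) for k in range(N)]
    x = mps_to_vector(cores)
    print(x.size, sum(G.size for G in cores))          # 2^N entries vs. far fewer parameters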

Mike Espig: Tensor Networks

In this talk we discuss a calculus of variations in arbitrary tensor representations, with a special focus on contracted tensor networks, and apply it to functionals of practical interest. The survey provides all the necessary ingredients for applying minimization methods in a general setting. The important cases of target functionals which are linear and quadratic with respect to the tensor product are discussed, and combinations of these functionals are presented in detail.

Boris N. Khoromskij: Tensor Numerical Methods for High-Dimensional PDEs. Prospects and Limitations

Modern methods of rank-structured tensor decomposition allow an efficient separable approximation of multivariate functions and operators, providing linear complexity scaling in the dimension [6, 7]. The recent quantized tensor train (QTT) matrix product states technique has been proved to provide a supercompressed representation of high-dimensional data with log-volume complexity [1]. The approach opens the way to the efficient numerical solution of high-dimensional PDEs with linear scaling in the dimension and logarithmic scaling in the grid size [2, 3, 4, 5, 8, 9]. We discuss asymptotically optimal low QTT-rank representations for a class of multivariate functions and operators, substantiating the computational efficiency of the QTT transform in higher dimensions. In particular, QTT expansions for a family of discrete multidimensional Hamiltonian operators will be addressed. Numerical illustrations in electronic structure calculations, quantum molecular dynamics and stochastic PDEs are presented. We conclude with a discussion of future prospects and limitations of tensor numerical methods in scientific computing.

[1] B. N. Khoromskij. O(d log N)-Quantics Approximation of N-d Tensors in High-Dimensional Numerical Modeling. Constr. Approx., 2011, DOI: 10.1007/s00365-011-9131-1. Preprint 55/2009, MPI MiS, Leipzig, 2009.
[2] B. N. Khoromskij, V. Khoromskaia, and H.-J. Flad. Numerical solution of the Hartree-Fock equation in multilevel tensor-structured format. SIAM J. Sci. Comp., 33(1), 2011, 45-65.
[3] B. N. Khoromskij and Ch. Schwab. Tensor-Structured Galerkin Approximation of Parametric and Stochastic Elliptic PDEs. SIAM J. Sci. Comp., 33(1), 2011, 1-25.
[4] B. N. Khoromskij and I. Oseledets. Quantics-TT collocation approximation of parameter-dependent and stochastic elliptic PDEs. Comp. Meth. in Applied Math., 10(4), 2010.
[5] B. N. Khoromskij and I. Oseledets. DMRG+QTT approach to high-dimensional quantum molecular dynamics. Preprint 68/2010, MPI MiS, Leipzig, 2010 (Numer. Math., submitted).
[6] B. N. Khoromskij. Tensor-Structured Numerical Methods in Scientific Computing: Survey on Recent Advances. Preprint 21/2010, MPI MiS, Leipzig, 2010 (submitted).
[7] B. N. Khoromskij. Introduction to Tensor Numerical Methods in Scientific Computing. Lecture Notes, Preprint 06-2011, University of Zürich, Institute of Mathematics, 2011, pp. 1-238.
[8] V. Kazeev, B. N. Khoromskij, and E. E. Tyrtyshnikov. QTT-tensor structure of multilevel Toeplitz matrices leads to convolution with logarithmic complexity. Preprint 36/2011, MPI MiS, Leipzig, 2011.
[9] V. Khoromskaia, B. N. Khoromskij, and R. Schneider. QTT Representation of the Hartree and Exchange Operators in Electronic Structure Calculations. Preprint 37/2011, MPI MiS, Leipzig, 2011.
