This talk will start with an overview of tensor decompositions. For higher-order tensors, there are several generalizations of matrix rank; I will focus on the so-called tensor rank, which corresponds to the canonical polyadic decomposition (CPD).
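For reference, the CPD of a third-order tensor can be sketched in standard notation (this formula is an illustrative addition, not part of the original abstract):

\[
\mathcal{T} \;=\; \sum_{r=1}^{R} a_r \otimes b_r \otimes c_r,
\]

where the tensor rank of \(\mathcal{T}\) is the smallest \(R\) admitting such a decomposition.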
The CPD has several remarkable properties, such as uniqueness (under mild conditions) and the possible non-existence of a best low-rank approximation. These properties also hold for a larger class of so-called X-rank decompositions.
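To illustrate the non-existence phenomenon, here is a standard example from the literature, often attributed to de Silva and Lim, added here for context: for linearly independent pairs \(\{a, a'\}\), \(\{b, b'\}\), \(\{c, c'\}\), the rank-2 tensors

\[
\mathcal{T}_\varepsilon \;=\; \frac{1}{\varepsilon}\Bigl((a + \varepsilon a') \otimes (b + \varepsilon b') \otimes (c + \varepsilon c') \;-\; a \otimes b \otimes c\Bigr)
\]

converge, as \(\varepsilon \to 0\), to the rank-3 tensor \(a' \otimes b \otimes c + a \otimes b' \otimes c + a \otimes b \otimes c'\), which therefore has no best rank-2 approximation.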
In the last part of the talk, I will address a particular neural network architecture with flexible activation functions. Specifically, I will show why X-rank decompositions are useful for studying the identifiability of such networks, and how CPD-based algorithms can be used for network compression.