The meticulous design of neural architectures is a critical factor behind the success of deep networks. Polynomial Networks (PNs) enable a new network design that treats a network as a high-degree polynomial expansion of the input. In the first part of the talk, we identify how polynomial expansions already exist inside popular deep networks, such as non-local networks or the self-attention mechanism. We then extend PNs beyond single-variable polynomial expansions. Having multiple (possibly diverse) inputs is critical for conditional generation tasks. We show how PNs can be extended to tackle such conditional generation tasks and how they can recover attribute combinations missing from the training set, e.g. in image generation.
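To make the idea of a polynomial expansion of the input concrete, below is a minimal sketch of a degree-2 polynomial layer built from Hadamard products, in the spirit of the recursive formulations used in polynomial networks. The class name, dimensions, and the exact recursion are illustrative assumptions, not the speaker's specific model.

```python
import torch
import torch.nn as nn

class Degree2PolyNet(nn.Module):
    """Illustrative sketch (assumption): a degree-2 polynomial expansion of the input z,
    built recursively with Hadamard (element-wise) products instead of activation functions:
        x1  = A1 z                 # degree-1 term
        x2  = (A2 z) * x1 + x1     # Hadamard product raises the degree to 2
        out = C x2 + b             # output is a degree-2 polynomial of z
    """
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.A1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.A2 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.C = nn.Linear(hidden_dim, out_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x1 = self.A1(z)              # first-degree features
        x2 = self.A2(z) * x1 + x1    # second-degree features via element-wise product
        return self.C(x2)            # every output is a polynomial of degree 2 in z

# Usage example (shapes are arbitrary):
# net = Degree2PolyNet(in_dim=64, hidden_dim=128, out_dim=10)
# y = net(torch.randn(8, 64))
```

Higher-degree expansions follow by stacking further recursive steps, and the multi-input (conditional) setting discussed in the talk would expand jointly in several input variables rather than a single z.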
Refreshments will be served before the talk, starting at 2:30.