[1311.2987] Learning Input and Recurrent Weight Matrices ...
https://arxiv.org/abs/1311.2987 · 13.11.2013 · Echo State Networks (ESNs) are a special type of the temporally deep network model, the Recurrent Neural Network (RNN), where the recurrent matrix is carefully designed and both the recurrent and input matrices are fixed. An ESN uses the linearity of the activation function of the output units to simplify the learning of the output matrix. In this paper, we …
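A minimal sketch of the standard ESN setup this abstract refers to (not the paper's own technique): the input and recurrent matrices are fixed at random, and only the linear readout is learned, here via ridge regression. All sizes, the tanh reservoir, and the regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 3, 200, 2                     # hypothetical sizes
W_in = rng.normal(0, 0.1, (n_res, n_in))           # fixed input matrix
W_rec = rng.normal(0, 1.0, (n_res, n_res))         # fixed recurrent matrix
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    h = np.zeros(n_res)
    states = []
    for u in inputs:
        h = np.tanh(W_rec @ h + W_in @ u)
        states.append(h.copy())
    return np.array(states)                        # (T, n_res)

# Because the readout is linear, training reduces to a least-squares problem.
U_train = rng.normal(size=(500, n_in))             # toy input sequence
Y_train = rng.normal(size=(500, n_out))            # toy targets
H = run_reservoir(U_train)
ridge = 1e-6
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y_train)
```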
Kronecker Recurrent Units - fleuret.org
fleuret.org › papers › jose-et-al-icml2018 · … a unitary recurrent weight matrix. The use of norm-preserving unitary maps prevents the gradients from exploding or vanishing, and thus helps to capture long-term dependencies. The resulting model, called unitary RNN (uRNN), is computationally efficient since it only explores a small subset of general unitary matrices. Unfortunately, since uRNNs can …
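A minimal sketch of the idea the snippet describes: when the recurrent matrix is unitary (here its real analogue, an orthogonal matrix), the linear part of the recurrence preserves the hidden-state norm, which is what keeps gradients backpropagated through that map from exploding or vanishing. Sizes and the tanh nonlinearity are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_in = 64, 8
Q, _ = np.linalg.qr(rng.normal(size=(n_h, n_h)))   # random orthogonal recurrent matrix
W_in = rng.normal(0, 0.1, (n_h, n_in))

h = rng.normal(size=n_h)
x = rng.normal(size=n_in)

pre = Q @ h                                        # norm-preserving linear map
print(np.allclose(np.linalg.norm(pre), np.linalg.norm(h)))  # True
h_next = np.tanh(pre + W_in @ x)                   # nonlinearity applied afterwards
```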
Learning Input and Recurrent Weight Matrices in Echo State ...
www.microsoft.com › en-us › research · These connections are mathematically represented by the recurrent weight matrix W_rec, the input weight matrix W, and the output weight matrix U, respectively. The RNN architecture, in terms of the signal flow, is illustrated in Fig. 1, which also includes input-to-output and output-to-hidden (feedback) connections, with the latter denoted by W_fd. The sequential sections of Fig. 1(a), 1(b), …
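A minimal sketch of the signal flow the snippet names, using the same matrix symbols: input weights W, recurrent weights W_rec, output (readout) weights U, and output-to-hidden feedback W_fd. The input-to-output connection, the activation functions, and all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 4, 32, 2
W = rng.normal(0, 0.1, (n_h, n_in))       # input -> hidden
W_rec = rng.normal(0, 0.1, (n_h, n_h))    # hidden -> hidden (recurrent)
U = rng.normal(0, 0.1, (n_out, n_h))      # hidden -> output (readout)
W_fd = rng.normal(0, 0.1, (n_h, n_out))   # output -> hidden (feedback)

def step(x_t, h_prev, y_prev):
    """One timestep of the recurrence, with output feedback into the hidden layer."""
    h_t = np.tanh(W @ x_t + W_rec @ h_prev + W_fd @ y_prev)
    y_t = U @ h_t                          # linear readout, as in the ESN
    return h_t, y_t

h, y = np.zeros(n_h), np.zeros(n_out)
for x_t in rng.normal(size=(10, n_in)):    # toy 10-step input sequence
    h, y = step(x_t, h, y)
```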
Persistent RNNs: Stashing Recurrent Weights On-Chip
proceedings.mlr.press › v48 › diamos16 · … represented by a two-dimensional matrix, referred to as the recurrent weight matrix. In this case, each timestep must be processed sequentially because the outputs of the next timestep depend on the outputs of the current timestep, requiring this operation to be performed using a matrix-vector product, followed by an application of the activation function. This is the most computationally expensive …
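A minimal sketch of the sequential dependency the snippet describes: each hidden state requires a matrix-vector product with the recurrent weight matrix followed by the activation, and cannot be computed until the previous timestep's result is available. The hidden size, sequence length, and tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, T = 1152, 100                          # hypothetical hidden size, sequence length
W_rec = rng.normal(0, 0.05, (n_h, n_h))     # the recurrent weight matrix
inputs = rng.normal(size=(T, n_h))          # precomputed input contributions per timestep

h = np.zeros(n_h)
for t in range(T):                          # inherently sequential loop over timesteps
    h = np.tanh(W_rec @ h + inputs[t])      # matrix-vector product + activation
```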
Learning Input and Recurrent Weight Matrices in Echo State ...
www.microsoft.com › en-us › research · The traditional echo state network (ESN) is a special type of temporally deep model, the recurrent neural network (RNN), in which the recurrent matrix is carefully designed and both the recurrent and input matrices are fixed. The ESN also adopts linear output (or readout) units to simplify the learning of the output matrix, the only matrix learned in the RNN. In this paper, we devise a special technique that takes advantage of the linearity of the output units in the ESN to learn the input and recurrent ...
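A minimal sketch of why linear output units are convenient for learning the other matrices; this is not the paper's actual derivation, just a one-timestep illustration: with a linear readout, the error signal reaching the hidden layer is simply U^T (y - target), from which gradients with respect to the input and recurrent matrices can be written down directly. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 4, 32, 2
W_in = rng.normal(0, 0.1, (n_h, n_in))
W_rec = rng.normal(0, 0.1, (n_h, n_h))
U = rng.normal(0, 0.1, (n_out, n_h))

x, h_prev = rng.normal(size=n_in), rng.normal(size=n_h)
target = rng.normal(size=n_out)

a = W_in @ x + W_rec @ h_prev               # pre-activation
h = np.tanh(a)
y = U @ h                                   # linear readout
err = y - target                            # dE/dy for E = 0.5 * ||y - target||^2

delta_h = (U.T @ err) * (1 - h**2)          # error at the hidden layer (tanh derivative)
grad_W_in = np.outer(delta_h, x)            # dE/dW_in for this timestep
grad_W_rec = np.outer(delta_h, h_prev)      # dE/dW_rec, truncated to one step
grad_U = np.outer(err, h)                   # readout gradient
```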