Recent Advances in Sequence-to-Sequence Learning - Materials Overview
Access-restricted course page
Course materials will be made available on this page over the course of the semester.
Neural Network Architectures
- Gehring, Jonas, et al. "Convolutional Sequence to Sequence Learning"
- Vaswani, Ashish, et al. "Attention is all you need"
- Chen, Mia Xu, et al. "The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation"
- Bradbury, James, et al. "Quasi-Recurrent Neural Networks"
- Xia, Yingce, et al. "Deliberation Networks: Sequence Generation Beyond One-Pass Decoding"
- Oord, Aaron van den, et al. "WaveNet: A generative model for raw audio"
- Pham, Ngoc-Quan, et al. "Very Deep Self-Attention Networks for End-to-End Speech Recognition"
Semi-supervised Seq2Seq
- He, Di, et al. "Dual Learning for Machine Translation"
- Baskar, Murali Karthick, et al. "Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text"
- Liu, Alexander H., et al. "Adversarial Training of End-to-End Speech Recognition Using a Criticizing Language Model"
- Chorowski, Jan, and Navdeep Jaitly "Towards better decoding and language model integration in sequence to sequence models"
- Gulcehre, Caglar, et al. "On Using Monolingual Corpora in Neural Machine Translation"
- Sriram, Anuroop, et al. "Cold Fusion: Training Seq2Seq Models Together with Language Models"
Beyond left-to-right decoding
Other