Transformer
A report that the Transformer, an architecture built solely from attention mechanisms with no RNNs and no CNNs, achieves strong results on translation tasks.
>We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.

Attention Is All You Need, Ashish Vaswani et al., arXiv, 2017/06
2017


Scaled Dot-Product Attention
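The paper defines this as Attention(Q, K, V) = softmax(QKᵀ / √d_k) V, where the dot products between queries and keys are divided by √d_k so the softmax does not saturate for large key dimensions. Below is a minimal NumPy sketch of that formula; the array shapes, function name, and the optional `mask` handling are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    # Similarity scores, scaled by sqrt(d_k) to keep softmax gradients stable
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        # Positions where mask is False are excluded from attention
        scores = np.where(mask, scores, -1e9)
    # Softmax over the key dimension gives the attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Example: 3 query positions attending over 4 key/value positions
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 64))
K = rng.standard_normal((4, 64))
V = rng.standard_normal((4, 32))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 32)
```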