
Quality Estimation in Machine Translation
Plan
1990s – 2010s: Statistical Machine Translation
2014: Neural Machine Translation
RNN concept
Sequence-to-sequence translation
Evaluation
Neural Quality Estimation
Similar model for Quality Estimation
WMT task
Our experiments
Summary
References
Thank You for Your attention
Slide 1: Quality Estimation in Machine Translation

Andrew Golman, DIHT, 3rd year bachelor student, 18.06.2019

Slide 2: Plan

Why do we need Neural Machine Translation?
What are Seq2Seq models?
Can we apply the same techniques to the Quality Estimation task?
What have we achieved?

Slide 3: 1990s – 2010s: Statistical Machine Translation

Alignment issues
Huge word- and phrase-level dictionaries, for every pair of languages!
Pictures from Stanford CS224n, lecture 8

Slide 4: 2014: Neural Machine Translation

A way to do Machine Translation with a single neural network
The most complex models have 200 million parameters
More fluent than SMT
Picks up the meaning first, then phrases it

Slide 5: RNN concept

[Figure: an RNN unrolled over a source sentence. Encoded words feed hidden states, and hidden states produce predicted word probabilities. Picture from Stanford CS224n, lecture 6]
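
The recurrence behind the picture is small enough to write out. Below is a minimal sketch of one vanilla-RNN step in pure Python; the weight matrices and inputs are arbitrary illustrative numbers, not trained values:

```python
import math

def rnn_step(x, h, W_xh, W_hh):
    """One vanilla-RNN step: h_new = tanh(W_xh @ x + W_hh @ h).
    Vectors are plain lists; weight matrices are lists of rows."""
    def matvec(W, v):
        return [sum(w * vi for w, vi in zip(row, v)) for row in W]
    summed = [a + b for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
    return [math.tanh(s) for s in summed]

# Toy 2-dimensional example: run the same cell over a 3-step "sentence".
W_xh = [[0.5, 0.0], [0.0, 0.5]]
W_hh = [[0.1, 0.2], [0.2, 0.1]]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = rnn_step(x, h, W_xh, W_hh)
print(len(h))  # the hidden state keeps a fixed size regardless of sentence length
```

The same cell (same weights) is applied at every position, which is what lets one small network read a sentence of any length.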

Slide 6: Sequence-to-sequence translation

Picture from Stanford CS224n, lecture 8
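
The decoder side of a seq2seq model is essentially a feedback loop: the previously emitted token is fed back in as input until an end-of-sequence token appears. A minimal sketch of that loop, with a hypothetical `toy_step` standing in for one step of a trained decoder RNN:

```python
def greedy_decode(encoder_state, step_fn, max_len=10, eos="</s>"):
    """Generic seq2seq decoding loop: feed back the previous token,
    stop at EOS or max_len. step_fn(state, prev_token) returns
    (new_state, next_token) and stands in for one decoder-RNN step."""
    out, state, prev = [], encoder_state, "<s>"
    while len(out) < max_len:
        state, tok = step_fn(state, prev)
        if tok == eos:
            break
        out.append(tok)
        prev = tok
    return out

# Hypothetical toy decoder: emits a canned translation, then EOS.
canned = ["I", "love", "you", "</s>"]
def toy_step(state, prev_token):
    return state + 1, canned[state]

print(greedy_decode(0, toy_step))  # ['I', 'love', 'you']
```

A real decoder would compute `next_token` from a softmax over the vocabulary; only the loop structure is shown here.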

Slide 7: Evaluation

Compare with baseline translations
Pay assessors to mark errors
Build a neural system for error detection
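
Comparison with baseline translations is usually automated with n-gram overlap metrics such as BLEU. A minimal sketch of its simplest ingredient, clipped unigram precision (the example sentences are made up for illustration):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: the fraction of candidate words that
    also appear in the reference, with repeats clipped to the reference
    count. This is the simplest building block of BLEU-style metrics."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(len(candidate), 1)

ref = "the cat is on the mat".split()
print(unigram_precision("the cat sat on the mat".split(), ref))  # 5/6
```

Full BLEU also counts higher-order n-grams and penalizes short outputs, but the clipping idea is the same.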

Slide 8: Neural Quality Estimation


Slide 9: Similar model for Quality Estimation

First bidirectional LSTM
Second bidirectional LSTM
OK/BAD labels
Error classification
Picture from [2]
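
In the spirit of the model above, word-level QE combines left-to-right and right-to-left context before labelling each target word OK or BAD. A heavily simplified sketch, with a hand-picked decay and threshold standing in for trained bidirectional-LSTM weights and per-word "adequacy" features invented for illustration:

```python
def bidirectional_scores(word_feats, decay=0.5):
    """Blend each word's feature with its left and right context,
    mimicking the forward and backward passes of a bidirectional LSTM."""
    fwd, acc = [], 0.0
    for f in word_feats:          # forward pass: accumulate left context
        acc = decay * acc + f
        fwd.append(acc)
    bwd, acc = [], 0.0
    for f in reversed(word_feats):  # backward pass: accumulate right context
        acc = decay * acc + f
        bwd.append(acc)
    bwd.reverse()
    return [(a + b) / 2 for a, b in zip(fwd, bwd)]

def label(word_feats, threshold=0.7):
    return ["OK" if s >= threshold else "BAD"
            for s in bidirectional_scores(word_feats)]

# Hypothetical per-word features: the third word looks poorly translated.
print(label([0.9, 0.8, 0.1, 0.7]))  # ['OK', 'OK', 'BAD', 'OK']
```

The real model learns these recurrences and the final classifier jointly; the sketch only shows why two directions matter, since a word's quality depends on both its left and right neighbours.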

Slide 10: WMT task

English-German, English-Spanish, and English-Russian datasets
15,000 labelled sentences
Pretrained models are allowed
Results from [2]

Slide 11: Our experiments

Use CRFs (lattice-structured RNNs) for final classification
Try the transformer architecture
Results from [3]
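
A CRF on top of the network replaces independent per-word decisions with a decode over the whole label sequence, so transition scores (for example, "errors tend to cluster") can override a weak per-word signal. A minimal Viterbi decode for the two-label OK/BAD case; all scores here are illustrative, not trained:

```python
LABELS = ["OK", "BAD"]
# TRANS[i][j]: score of moving from label i to label j.
TRANS = [[0.8, 0.2],   # OK -> OK likely, OK -> BAD less so
         [0.3, 0.7]]   # BAD -> BAD likely (errors cluster)

def viterbi(emissions):
    """emissions[t][j]: per-word score for label j at position t.
    Returns the highest-scoring label sequence under TRANS."""
    n = len(emissions)
    score = list(emissions[0])
    back = []
    for t in range(1, n):
        new, ptr = [], []
        for j in range(len(LABELS)):
            cands = [score[i] + TRANS[i][j] for i in range(len(LABELS))]
            best = max(range(len(LABELS)), key=lambda i: cands[i])
            new.append(cands[best] + emissions[t][j])
            ptr.append(best)
        score, back = new, back + [ptr]
    # Follow back-pointers from the best final label.
    j = max(range(len(LABELS)), key=lambda k: score[k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return [LABELS[j] for j in reversed(path)]

print(viterbi([[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]]))  # ['OK', 'OK', 'OK']
```

Note that a per-word argmax would label the middle word BAD (0.6 > 0.4); the strong OK-to-OK transition scores flip it, which is exactly the smoothing a CRF layer contributes.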

Slide 12: Summary

NMT is more efficient than SMT
RNNs are used for text generation
First NMT systems were based on two RNNs
Quality estimation models can be used to compare translation systems
Results are improving every year

Slide 13: References

1. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016
2. Hyun Kim, Jong-Hyeok Lee, Seung-Hoon Na. Predictor-Estimator Using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation, 2017
3. Mikhail Mosyagin, Amir Yagudin, Andrey Golman. Comparison of Various Architectures for Word-Level Quality Estimation, 2019

Slide 14: Thank You for Your attention

golman.as@phystech.edu
