CS11-747 Neural Networks for NLP
Risk, Minimum Risk Training, Reinforcement Learning
Graham Neubig
Site https://phontron.com/class/nn4nlp2020/
Maximum likelihood training maximizes the likelihood of predicting the next word in the reference given the previous words:

ℓ(E | F) = −log P(E | F) = −Σ_{t=1}^{T} log P(e_t | F, e_1, …, e_{t−1})

But at test time we may make mistakes that propagate: the model sees only correct prefixes during training, and cannot deal with its own mistakes at test time.
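As a concrete illustration, a minimal sketch with made-up toy probabilities (not the lecture's code): the MLE loss is just the sum of per-step negative log probabilities of the reference words under teacher forcing.

```python
import math

def mle_loss(step_probs):
    """Negative log-likelihood of a reference sequence.

    step_probs[t] is P(e_t | F, e_1, ..., e_{t-1}): the model's
    probability of the t-th reference word given the source F
    and the gold prefix (teacher forcing).
    """
    return -sum(math.log(p) for p in step_probs)

# Toy per-step probabilities for a 3-word reference:
loss = mle_loss([0.5, 0.25, 0.8])  # = -log(0.1) ~= 2.3026
```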
[Figure: an encoder followed by a chain of per-step classifiers, each predicting the next word given the previously generated ones]
Some mistakes are worse than others, e.g. as measured by an evaluation metric such as BLEU or METEOR, so we'd like to penalize them appropriately.
Problem: the output is produced by a discrete argmax decision

Ê = argmax_{Ẽ} P(Ẽ | F)

and the error, e.g.

error(E, Ê) = 1 − BLEU(E, Ê)

is defined over discrete decisions everywhere, which is not conducive to gradient-based training.
Minimum risk training instead minimizes the expected error, or risk:

risk(F, E, θ) = Σ_{Ẽ} P(Ẽ | F; θ) error(E, Ẽ)

Shen et al. (2016) do so for NMT.
The sum over all hypotheses Ẽ is intractable, so we sample a subset S (or take it from n-best search) and calculate risk over that:

risk(F, E, θ, S) = Σ_{Ẽ∈S} (P(Ẽ | F; θ) / Z) error(E, Ẽ)

where Z normalizes the probabilities over the sample S. A temperature τ can sharpen or smooth the distribution:

risk(F, E, θ, τ, S) = Σ_{Ẽ∈S} (P(Ẽ | F; θ)^{1/τ} / Z) error(E, Ẽ)
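A minimal sketch of the sampled, sharpened risk, using hypothetical toy numbers; in practice the probabilities would come from the model and error() from e.g. 1 − sentence-BLEU:

```python
def sampled_risk(hyps, tau=1.0):
    """Minimum-risk objective over a sample S of hypotheses.

    hyps: list of (prob, error) pairs, where prob is the model
    probability of the hypothesis and error is e.g. 1 - BLEU
    against the reference.
    tau: temperature; tau < 1 sharpens the distribution.
    """
    weights = [p ** (1.0 / tau) for p, _ in hyps]
    z = sum(weights)  # normalizer over the sample only
    return sum(w / z * err for w, (_, err) in zip(weights, hyps))

# Three sampled hypotheses with toy probabilities and errors:
hyps = [(0.5, 0.1), (0.3, 0.4), (0.2, 0.9)]
risk = sampled_risk(hyps)  # 0.5*0.1 + 0.3*0.4 + 0.2*0.9 = 0.35
```

Sharpening (τ < 1) shifts weight toward the highest-probability hypotheses, so here it lowers the risk estimate.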
<latexit sha1_base64="e4M3TNipvdjfyQh+cH52R4IE4w0=">ACa3icbVHLbhMxFPUMrxIeTQsLXguLqFIiRWGmqgRVhVSBWrEMKqEVmRB5PHcaK7ZnZN9BRNas+EN2fAIbvgFPOovSciXrHp1zH/ZxWkphMYp+BeGNm7du39m427l3/8HDze7W9mdbVIbDhBeyMGcpsyCFhgkKlHBWGmAqlXCaLt83+uk3MFYU+hOuSpgpdq5FLjhDT827PxKE7+iMsMu6fzykR0Oa4AKQNZlVQ3oyoG9pYis1dwkKmYE7qmkiND3xKTeMu3H/kqBERo8P2hmDry5+1Yypa/fFi+tVYExh6n6zaMHQNw3m3V40itZBr4O4BT3Sxnje/ZlkBa8UaOSWTuNoxJnjhkUXELdSoLJeNLdg5TDzVTYGdubVZNdzyT0bw/mika/Zyh2PK2pVKfaViuLBXtYb8nzatMH8zc0KXFYLmF4vySlIsaOM8zYQBjnLlAeNG+LtSvmDeQfT/0/EmxFefB1Mdkf7o/jXu/wXevGBnlOXpI+iclrckg+kDGZE5+B5vBk+Bp8Cd8HD4LX1yUhkHb84j8E+HOX85buPE=</latexit><latexit sha1_base64="e4M3TNipvdjfyQh+cH52R4IE4w0=">ACa3icbVHLbhMxFPUMrxIeTQsLXguLqFIiRWGmqgRVhVSBWrEMKqEVmRB5PHcaK7ZnZN9BRNas+EN2fAIbvgFPOovSciXrHp1zH/ZxWkphMYp+BeGNm7du39m427l3/8HDze7W9mdbVIbDhBeyMGcpsyCFhgkKlHBWGmAqlXCaLt83+uk3MFYU+hOuSpgpdq5FLjhDT827PxKE7+iMsMu6fzykR0Oa4AKQNZlVQ3oyoG9pYis1dwkKmYE7qmkiND3xKTeMu3H/kqBERo8P2hmDry5+1Yypa/fFi+tVYExh6n6zaMHQNw3m3V40itZBr4O4BT3Sxnje/ZlkBa8UaOSWTuNoxJnjhkUXELdSoLJeNLdg5TDzVTYGdubVZNdzyT0bw/mika/Zyh2PK2pVKfaViuLBXtYb8nzatMH8zc0KXFYLmF4vySlIsaOM8zYQBjnLlAeNG+LtSvmDeQfT/0/EmxFefB1Mdkf7o/jXu/wXevGBnlOXpI+iclrckg+kDGZE5+B5vBk+Bp8Cd8HD4LX1yUhkHb84j8E+HOX85buPE=</latexit><latexit sha1_base64="e4M3TNipvdjfyQh+cH52R4IE4w0=">ACa3icbVHLbhMxFPUMrxIeTQsLXguLqFIiRWGmqgRVhVSBWrEMKqEVmRB5PHcaK7ZnZN9BRNas+EN2fAIbvgFPOovSciXrHp1zH/ZxWkphMYp+BeGNm7du39m427l3/8HDze7W9mdbVIbDhBeyMGcpsyCFhgkKlHBWGmAqlXCaLt83+uk3MFYU+hOuSpgpdq5FLjhDT827PxKE7+iMsMu6fzykR0Oa4AKQNZlVQ3oyoG9pYis1dwkKmYE7qmkiND3xKTeMu3H/kqBERo8P2hmDry5+1Yypa/fFi+tVYExh6n6zaMHQNw3m3V40itZBr4O4BT3Sxnje/ZlkBa8UaOSWTuNoxJnjhkUXELdSoLJeNLdg5TDzVTYGdubVZNdzyT0bw/mika/Zyh2PK2pVKfaViuLBXtYb8nzatMH8zc0KXFYLmF4vySlIsaOM8zYQBjnLlAeNG+LtSvmDeQfT/0/EmxFefB1Mdkf7o/jXu/wXevGBnlOXpI+iclrckg+kDGZE5+B5vBk+Bp8Cd8HD4LX1yUhkHb84j8E+HOX85buPE=</latexit>(Review of Karpathy 2016)
up or down, and R is the win/loss at the end of the game
Why RL in NLP? One scenario: e.g. a dialog where we can make responses and will get a reward at the end.
Another: latent-variable models where we make discrete decisions about the latent variable, then get a reward based on their configuration.
Another: sequence-level error functions such as BLEU score that we cannot optimize without first generating a whole sentence.
Plain supervised training is sometimes called "imitation learning," imitating a teacher (although imitation learning is more general).
Supervised learning minimizes the negative log-likelihood of the true label:

ℓ_super(Y, X) = −log P(Y | X)

Self-training creates its own label, either by sampling Ŷ ∼ P(Y | X) or by taking the argmax Ŷ = argmax_Y P(Y | X), e.g. keeping only sentences where multiple models agree (Blum and Mitchell 1998), and then treats it as the target:

ℓ_self(X) = −log P(Ŷ | X)

Policy gradient (REINFORCE) instead weights the self-training loss by a reward, so high-reward outputs are reinforced:

ℓ_self(X) = −R(Ŷ, Y) log P(Ŷ | X)
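A sketch of this REINFORCE-style loss with toy values; R would be e.g. the sentence-BLEU of a sampled output Ŷ against the reference Y:

```python
import math

def reinforce_loss(sampled_logprob, reward):
    """Policy-gradient (REINFORCE) loss for one sampled output:
    -R * log P(Y^ | X).  Differentiating w.r.t. the model parameters
    scales the usual log-likelihood gradient by the reward, so
    high-reward samples are pushed up more strongly."""
    return -reward * sampled_logprob

# A sampled output with log-probability log(0.1) and reward 0.7:
loss = reinforce_loss(math.log(0.1), 0.7)
```

Note that a zero-reward sample contributes nothing to the loss, and the gradient direction is the same as for MLE on the sampled output.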
Problem: we need to take into account the time delay between action and reward.
[Figure: two trajectories of actions a1 … a6, one receiving reward +1 and the other +3 only at the end of the sequence]
Problem: learning is unstable, especially with large action spaces (e.g. the words of a vocabulary).
Adding a baseline: predict the expected reward B for a particular sentence, and make the update reflect when we did better or worse than expected:

ℓ_baseline(X) = −(R(Ŷ, Y) − B(Ŷ)) log P(Ŷ | X)

For example:
"This is an easy sentence": Reward 0.8, Baseline 0.95, R − B = −0.15
"Buffalo Buffalo Buffalo": Reward 0.3, Baseline 0.1, R − B = 0.2
The baseline can be predicted from the decoder state (e.g. Ranzato et al. 2016), or set to the average of previous rewards as the baseline (e.g. Dayan 1990).
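The average-of-rewards option can be sketched as follows, with toy numbers; real systems often predict B from the decoder state instead:

```python
import math

def baseline_loss(sampled_logprob, reward, baseline):
    """REINFORCE with a baseline: -(R - B) * log P(Y^ | X).
    Samples that beat the expected reward get a positive weight,
    worse-than-expected samples a negative one."""
    return -(reward - baseline) * sampled_logprob

# Running-average baseline over previously seen rewards:
past_rewards = [0.3, 0.5, 0.7]
baseline = sum(past_rewards) / len(past_rewards)  # 0.5

better = baseline_loss(math.log(0.2), 0.8, baseline)  # R > B: positive loss
worse = baseline_loss(math.log(0.2), 0.2, baseline)   # R < B: negative loss
```

Subtracting B does not change the expected gradient, but it reduces its variance, which is exactly why it stabilizes learning.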
Stabilizing training: with larger batches we can sample many different examples before performing an update.
Multiple rollouts can similarly be done before an update to stabilize.
We can also save past states/actions and re-use them when we update parameters (experience replay, Lin 1993).
Warm-start: when supervised data is available (not latent variables or standard RL settings), we can pre-train with MLE and then anneal from MLE to the full objective.
In reinforcement learning, the structure of the computation depends on the choices you make; in supervised MLE training the computation structure doesn't change, which makes learning simpler and more stable.
Policy-based methods learn a probabilistic policy that maximizes the expectation of the reward.
Value-based methods estimate the value of the result of taking a particular action, and take the action with the highest expected value.
For sequence generation, the state is the input and previously generated words, and the action will be the next word to generate.
Q function: the expected future reward of taking action a_t in state s_t:

Q(s_t, a_t) = E[Σ_{t'=t}^{T} R(a_{t'})]    â_t = argmax_{a_t} Q(s_t, a_t)

In small state spaces we can maintain a table with the Q function for every state and update it; in larger spaces we estimate Q by function approximation, e.g. regression with neural networks (e.g. Tesauro 1995):

Q(s_t, a_t) ← (1 − α) Q(s_t, a_t) + α R(a_t)
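A minimal sketch of this tabular-style update, with a hypothetical toy state/action encoding; α is the learning rate:

```python
def q_update(q_table, state, action, reward, alpha=0.1):
    """Interpolated Q update:
    Q(s_t, a_t) <- (1 - alpha) * Q(s_t, a_t) + alpha * R(a_t)."""
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = (1 - alpha) * old + alpha * reward
    return q_table[(state, action)]

q = {}
q_update(q, "s0", "up", 1.0, alpha=0.5)  # 0.0 -> 0.5
q_update(q, "s0", "up", 1.0, alpha=0.5)  # 0.5 -> 0.75
```

Each update moves the stored estimate a fraction α toward the observed reward, so repeated rewards of 1.0 make Q approach 1.0.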
If we always take the highest-value action, we may get stuck in a local minimum.
Exploration comes naturally in probability-based methods, as we randomly sample actions; in value-based methods we can use epsilon-greedy exploration, taking a random action with a certain probability ε.
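Epsilon-greedy selection can be sketched as below; the action names and Q values are hypothetical toy examples:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take a random action (explore),
    otherwise the action with the highest estimated value (exploit).
    q_values: dict mapping action -> estimated Q value."""
    if rng.random() < epsilon:
        return rng.choice(sorted(q_values))
    return max(q_values, key=q_values.get)

q_values = {"wait": 0.2, "translate": 0.9}
greedy = epsilon_greedy(q_values, epsilon=0.0)   # always exploits
explore = epsilon_greedy(q_values, epsilon=1.0)  # always random
```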
We can also add an intrinsic reward for visiting new states (Schmidhuber 1991, Bellemare et al. 2016).
Dialog is one of the most prominent applications of reinforcement learning in NLP (survey: Young et al. 2013).
Task-based dialog is often framed as a partially observed MDP (to handle uncertainty about the user's state).
RL has been applied to end-to-end task-based dialog (Williams and Zweig 2017) and chatbot dialog (Li et al. 2017).
Training typically interacts with a user simulator that has an internal state (Schatzmann et al. 2007).
The system needs to track user state with incomplete information.
Instructions can be mapped to actions, with rewards based on progress (Branavan et al. 2009).
For simultaneous MT, an agent decides whether to wait or translate (Grissom et al. 2014, Gu et al. 2017).
Information extraction systems can gather extra evidence, searching the web as necessary (Narasimhan et al. 2016).
Models can also learn to skim, skipping down sentences before reading in depth.
In each case we let the model act, then measure the results as a reward.