Paper Summary: A Neural Conversational Model
The paper presents a domain-agnostic approach to conversational modelling based on the Sequence to Sequence learning framework.
Link to the paper
Neural Conversational Model (NCM)
A Recurrent Neural Network (RNN) reads the input sentence, one token at a time, and predicts the output sequence, one token at a time.
The model is trained end-to-end by backpropagation.
It maximises the probability of the correct output sequence given its context (equivalently, it minimises the cross-entropy).
Inference is greedy: each predicted output token is fed back as input to predict the next output token.
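The read-then-generate loop described above can be sketched in a few lines of numpy. Everything here is a toy stand-in: the vocabulary size, hidden size, and randomly initialised weights are illustrative only (the real NCM learns LSTM parameters by backpropagation), and the greedy feedback loop is the part being demonstrated.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 12, 16  # toy vocabulary and hidden sizes (illustrative values)

# Randomly initialised toy parameters; a trained model learns these.
E = rng.normal(0, 0.1, (V, H))    # token embeddings
U = rng.normal(0, 0.1, (H, H))    # input-to-hidden weights
W = rng.normal(0, 0.1, (H, H))    # hidden-to-hidden (recurrent) weights
Wo = rng.normal(0, 0.1, (H, V))   # hidden-to-vocabulary logits

def step(h, tok):
    """One RNN step: consume token `tok` and update hidden state `h`."""
    return np.tanh(E[tok] @ U + h @ W)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def greedy_decode(src, eos=0, max_len=10):
    # Encoder: read the input sentence one token at a time.
    h = np.zeros(H)
    for tok in src:
        h = step(h, tok)
    # Decoder: feed each predicted token back in to predict the next one.
    out, tok = [], eos
    for _ in range(max_len):
        h = step(h, tok)
        tok = int(np.argmax(softmax(h @ Wo)))  # greedy choice
        if tok == eos:
            break
        out.append(tok)
    return out

reply = greedy_decode([3, 5, 7])  # a "reply" to a toy input sentence
```

With untrained weights the reply is meaningless, but the control flow is the point: the encoder compresses the context into a hidden state, and the decoder's own greedy predictions become its next inputs.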
Datasets
A closed-domain IT HelpDesk dataset of conversations about computer-related issues.
An open-domain OpenSubtitles dataset containing movie conversations.
Results
The paper reports sample conversations from interactions between a human actor and the NCM.
The NCM achieves lower perplexity than an n-gram model.
The NCM outperforms CleverBot in a subjective test in which human evaluators graded the two systems.
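Perplexity, the metric used in the comparison above, is just the exponential of the average per-token cross-entropy (negative log-likelihood). A minimal numpy illustration, with made-up per-token probabilities:

```python
import numpy as np

# Probabilities a model assigned to the correct tokens (made-up numbers).
p_correct = np.array([0.4, 0.25, 0.5, 0.1])

cross_entropy = -np.mean(np.log(p_correct))  # average negative log-likelihood per token
perplexity = float(np.exp(cross_entropy))    # lower is better
```

Intuitively, a perplexity of k means the model is, on average, as uncertain as if it were choosing uniformly among k tokens at each step, which is why a lower value than the n-gram baseline indicates a better fit.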
Strengths
End-to-end training without handcrafted rules.
The underlying architecture (the Sequence to Sequence framework) can be leveraged for machine translation, question answering, and other tasks.
Weaknesses
The responses are simple, short, and at times inconsistent.
The objective function of the Sequence to Sequence framework is not designed to capture the actual objective of conversational models.
Copyright 2016-18, Compose Labs Inc. All rights reserved.