
Search in the Catalogues and Directories

Hits 1 – 2 of 2

1
Assessing the Robustness of Conversational Agents using Paraphrases
Abstract: The 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), San Francisco, United States of America, 4-9 April 2019 ; Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle with users abandoning the system. In this paper we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated based on known working input by performing lexical substitutions. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results as it appears that this approach can help anticipate potential understanding shortcomings and that these shortcomings can be addressed by the generated paraphrases. ; Science Foundation Ireland ; Microsoft Corporation
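
The testing approach described in the abstract (generate paraphrases of a known working input by lexical substitution, then check whether the agent still resolves them to the known intent) could look roughly like the following Python sketch. The WordNet-based synonym substitution and the agent.predict_intent interface are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch (assumed details, not the paper's exact pipeline):
# paraphrase an utterance by single-word synonym substitution via WordNet,
# then measure how often the agent under test keeps the expected intent.
# Requires NLTK with the WordNet data installed: nltk.download("wordnet")
from nltk.corpus import wordnet as wn


def lexical_paraphrases(utterance: str) -> set[str]:
    """Generate paraphrases by replacing one word at a time with a synonym."""
    words = utterance.split()
    variants = set()
    for i, word in enumerate(words):
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                synonym = lemma.name().replace("_", " ")
                if synonym.lower() != word.lower():
                    variants.add(" ".join(words[:i] + [synonym] + words[i + 1:]))
    return variants


def assess_robustness(agent, utterance: str, expected_intent: str) -> float:
    """Fraction of generated paraphrases the agent still maps to the expected intent.

    `agent.predict_intent` is a hypothetical interface for the system under test.
    """
    paraphrases = lexical_paraphrases(utterance)
    if not paraphrases:
        return 1.0
    hits = sum(agent.predict_intent(p) == expected_intent for p in paraphrases)
    return hits / len(paraphrases)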
Keyword: Engines; Grammar; Robustness; Software; Task analysis; Testing; Tools
URL: http://hdl.handle.net/10197/10135
https://doi.org/10.1109/AITest.2019.000-7
BASE
2
Assessing the robustness of conversational agents using paraphrases
Guichard, Jonathan; Ruane, Elayne; Smith, Ross. - : IEEE Computer Society, 2019
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 2