IEEE P3168
$30.33
IEEE Draft Standard for Robustness Evaluation Test Methods for a Natural Language Processing Service that uses Machine Learning
Published By | Publication Date | Number of Pages |
IEEE | N/A | 27 |
New IEEE Standard – Active – Draft. Natural Language Processing (NLP) services that use machine learning have rich applications in solving various tasks and have been widely deployed, usually accessible via API calls. The robustness of NLP services is challenged by various well-known general corruptions and adversarial attacks. Examples of general corruptions include inadvertent or random deletion, addition, or repetition of characters or words. Adversarial attacks generate adversarial character, word, or sentence samples that cause the models underpinning the NLP services to produce incorrect results.

This standard proposes a method for quantitatively evaluating the robustness of NLP services. The method specifies the cases the evaluation needs to cover, and defines robustness metrics and how they are calculated. With the standard, service stakeholders, including service developers, service providers, and service users, can develop an understanding of the robustness of the services. The evaluation can be performed during various phases of the NLP service life cycle, such as the testing phase, the validation phase, and after deployment.
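To make the described evaluation concrete, here is a minimal sketch of how general character-level corruptions (random deletion, addition, and repetition) might be generated and used to score a service's robustness. The `corrupt` and `robustness_score` functions, the corruption rate, and the alphabet used for insertions are all illustrative assumptions; the actual test cases and metric definitions are specified in the standard itself.

```python
import random

# Assumed alphabet for random insertions; purely illustrative.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def corrupt(text, rate=0.1, rng=None):
    """Apply random character-level corruptions (delete, insert, repeat)
    at roughly `rate` per character, mimicking inadvertent typing noise."""
    rng = rng or random.Random(0)
    out = []
    for ch in text:
        if rng.random() >= rate:
            out.append(ch)                    # character passes through unchanged
            continue
        op = rng.choice(("delete", "insert", "repeat"))
        if op == "delete":
            continue                          # drop the character
        if op == "insert":
            out.append(rng.choice(ALPHABET))  # stray extra character before it
        out.append(ch)
        if op == "repeat":
            out.append(ch)                    # duplicated character
    return "".join(out)

def robustness_score(service, samples, rate=0.1):
    """Fraction of samples whose service output is unchanged when the
    input is corrupted; a hypothetical stand-in for the standard's metrics."""
    rng = random.Random(42)
    stable = sum(service(s) == service(corrupt(s, rate, rng)) for s in samples)
    return stable / len(samples)
```

An evaluation harness built along these lines could call a deployed NLP service's API with both clean and corrupted inputs and compare the results, which is the kind of black-box testing the abstract implies for services accessible only by API calls.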