Conference paper, Year: 2022

Weak supervision for Question Type Detection with large language models

Abstract

Large pre-trained language models (LLM) have shown remarkable Zero-Shot Learning performance on many Natural Language Processing tasks. However, designing effective prompts remains very difficult for some tasks, in particular for dialogue act recognition. We propose an alternative way to leverage pretrained LLM for such tasks that replaces manual prompts with simple rules, which are more intuitive and easier to design for some tasks. We demonstrate this approach on the question type recognition task and show that our zero-shot model achieves performance competitive with both a supervised LSTM trained on the full training corpus and another supervised model from previously published work on the MRDA corpus. We further analyze the limits of the proposed approach, which cannot be applied to every task, but may advantageously complement prompt programming for specific classes.
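As a rough illustration of what such rules can look like, the sketch below assigns coarse question types from simple surface patterns. The keyword lists, labels, and function names are hypothetical examples introduced here for clarity, not the rule set used in the paper.

```python
# Minimal illustrative sketch of rule-based weak labelling for question type
# detection. These rules are hypothetical examples, not the authors' rules;
# they only show how simple surface patterns can replace manual prompts as a
# source of (weak) supervision for a pretrained language model.

WH_WORDS = {"what", "who", "whom", "whose", "which", "where", "when", "why", "how"}
YESNO_STARTS = {"is", "are", "was", "were", "do", "does", "did",
                "can", "could", "will", "would", "should", "have", "has", "had"}

def weak_question_type(utterance: str) -> str:
    """Assign a coarse question-type label from simple surface rules."""
    tokens = utterance.lower().strip().split()
    if not tokens:
        return "other"
    first = tokens[0].strip("?,.!")
    if first in WH_WORDS:
        return "wh-question"
    if first in YESNO_STARTS:
        return "yes-no-question"
    if utterance.rstrip().endswith("?"):
        # Question mark without interrogative word order.
        return "declarative-question"
    return "other"

if __name__ == "__main__":
    for utt in ["What time does the meeting start?",
                "Did you finish the report?",
                "You already sent the slides?"]:
        print(utt, "->", weak_question_type(utt))
```

Labels produced this way can serve as weak supervision or be compared against a zero-shot LLM's predictions, without any manual prompt engineering.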
Main file

paper.pdf (156.11 Ko)
Origin: Files produced by the author(s)

Dates and versions

hal-03786135 , version 1 (23-09-2022)

Identifiers

  • HAL Id : hal-03786135 , version 1

Cite

Jiří Martínek, Christophe Cerisara, Pavel Král, Ladislav Lenc, Josef Baloun. Weak supervision for Question Type Detection with large language models. INTERSPEECH 2022, Sep 2022, Incheon, South Korea. ⟨hal-03786135⟩
