WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia is a conference paper.

instance of (P31):
- conference paper (Q23927052)

External links:
- ACL Anthology article ID (P7505): 2023.findings-emnlp.157
- arXiv ID (P818): 2305.14292
- DOI (P356): 10.18653/V1/2023.FINDINGS-EMNLP.157
- full work available at URL (P953): https://aclanthology.org/2023.findings-emnlp.157.pdf
- official website (P856): https://aclanthology.org/2023.findings-emnlp.157/

author (P50):
- Sina J. Semnani (Q120968159)
- Monica S. Lam (Q11763)

author name string (P2093):
- Heidi Zhang
- Violet Yao

cites work (P2860):
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity (Q117792668)
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (Q121158677)
- Wizard of Wikipedia: Knowledge-Powered Conversational Agents (Q61895939)
- Language Models are Few-Shot Learners (Q95727440)
language of work or name (P407): English (Q1860)

main subject (P921):
- Wikipedia (Q52)
- hallucination (Q116197048)
- large language model (Q115305900)

number of pages (P1104): 27
page(s) (P304): 2387–2413
publication date (P577): 2023-01-01
published in (P1433): Findings of the Association for Computational Linguistics: EMNLP 2023 (Q126740907)
title (P1476): WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
