#NLP# WikiChat is an improved RAG system. It curbs hallucination in large language models by grounding responses in data retrieved from a corpus.
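The grounding idea behind retrieval-augmented systems like WikiChat can be sketched minimally: retrieve the most relevant passages, then constrain the model to answer only from them. The retriever below is a naive token-overlap ranker, a hypothetical stand-in for whatever retriever WikiChat actually uses (the description does not specify one).

```python
def retrieve(query, corpus, k=2):
    """Rank corpus passages by naive token overlap with the query.
    A toy stand-in for a real dense or BM25 retriever."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(query, corpus):
    """Build a prompt instructing the model to answer only from the
    retrieved passages -- the core retrieval-grounding idea of RAG."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Answer using ONLY these passages:\n{context}\n\nQuestion: {query}"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Mount Fuji is the tallest mountain in Japan.",
    "Paris is the capital of France.",
]
prompt = grounded_prompt("Where is the Eiffel Tower?", corpus)
```

Because the answer must come from the supplied passages, claims outside the corpus are (ideally) never generated, which is the hallucination-reduction mechanism.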
Loki: Open-source solution designed to automate the process of verifying factuality
Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models".
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
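DoLa's central move is to contrast the next-token distribution of the final (mature) layer against that of an earlier (premature) layer, boosting tokens whose probability emerges in later layers. A minimal sketch of that contrastive scoring, with the paper's adaptive plausibility constraint and hypothetical toy logits (see the official repository for the real layer-selection logic):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def dola_contrast(final_logits, premature_logits, alpha=0.1):
    """Score tokens by log p_final - log p_premature, keeping only tokens
    that pass an adaptive plausibility threshold relative to the most
    probable token under the final layer (the alpha constraint)."""
    p_final = softmax(final_logits)
    p_premature = softmax(premature_logits)
    cutoff = alpha * max(p_final)
    scores = []
    for pf, pp in zip(p_final, p_premature):
        if pf >= cutoff:
            scores.append(math.log(pf) - math.log(pp))
        else:
            scores.append(float("-inf"))  # pruned: implausible under final layer
    return scores

# Token 2's probability rises sharply between the premature and final
# layer, so contrastive scoring selects it.
final = [2.0, 1.0, 3.0]
premature = [2.0, 1.0, 1.0]
scores = dola_contrast(final, premature)
best = max(range(len(scores)), key=scores.__getitem__)
```

Tokens that are equally likely at both depths score near zero, so the contrast rewards "late-emerging" factual knowledge rather than surface-level priors.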
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by large language models.
A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
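FActScore's metric is a precision over atomic facts: decompose a long-form generation into short factual claims, check each against a knowledge source, and report the supported fraction. A minimal sketch, where `is_supported` is a hypothetical stand-in for the paper's retrieval-plus-verification step:

```python
def factscore(atomic_facts, is_supported):
    """Fraction of atomic facts supported by the knowledge source.
    `atomic_facts`: short claims decomposed from one generation.
    `is_supported`: callable claim -> bool (stand-in for the real
    retrieval + entailment check used by FActScore)."""
    if not atomic_facts:
        return 0.0
    supported = sum(1 for fact in atomic_facts if is_supported(fact))
    return supported / len(atomic_facts)

# Toy knowledge source: a set of verified claims (exact-match lookup).
knowledge = {
    "Marie Curie won two Nobel Prizes",
    "Marie Curie was born in Warsaw",
}
facts = [
    "Marie Curie won two Nobel Prizes",
    "Marie Curie was born in Warsaw",
    "Marie Curie invented the telephone",  # unsupported claim
]
score = factscore(facts, lambda f: f in knowledge)  # 2 of 3 supported
```

Scoring at the atomic-fact level is what makes the evaluation "fine-grained": a mostly correct biography with one fabricated claim is penalized proportionally instead of being marked wholly wrong.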
[Data + code] ExpertQA : Expert-Curated Questions and Attributed Answers
Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
#NLP# Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond"
Implementation of the paper "FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations (NAACL 2022)"
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
#NLP# Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
#LLM# The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning"
#LLM# SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433
Source code of our EMNLP 2024 paper "FactAlign: Long-form Factuality Alignment of Large Language Models"
#LLM# Code for the paper "Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators"
Code and data for the Dreyer et al. (2023) paper on abstractiveness and factuality in abstractive summarization
Dataset: Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society
#NLP# Event factuality prediction with a trigger-state LSTM