An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Official implementation for Yuan, Liu, Zhong, et al., "KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches" (EMNLP 2024 Findings).
Official implementation for EMNLP 2024 (Main) "AgentReview: Exploring Academic Peer Review with LLM Agent."
[EMNLP 2024 Findings] Official PyTorch implementation of "Adaptive Contrastive Search: Uncertainty-Guided Decoding for Open-Ended Text Generation"
EMNLP 2024 "Re-reading Improves Reasoning in Large Language Models". Simply repeating the question gives the model a form of bidirectional understanding that improves reasoning.
Source code of our paper "PairDistill: Pairwise Relevance Distillation for Dense Retrieval", EMNLP 2024 Main.
An official PyTorch implementation of "PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning" (EMNLP 2024 Findings).
[EMNLP 2024] The official repository for our long paper, "How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection"
Code for "Generative Deduplication for Social Media Data Selection" (Findings of EMNLP 2024)
Official Repository for Cross-lingual Back-Parsing: Utterance Synthesis from Meaning Representation for Zero-Resource Semantic Parsing (EMNLP 2024)
Data and software artifacts for the EMNLP 2024 (Main) paper "What Are the Odds? Language Models Are Capable of Probabilistic Reasoning"
[EMNLP 2024] Materials for the paper "Evaluating Large Language Models via Linguistic Profiling"
Repository for the paper "STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions" (EMNLP 2024)
Axis Tour: Word Tour Determines the Order of Axes in ICA-transformed Embeddings (Published in EMNLP 2024 Findings)
Understanding Higher-Order Correlations Among Semantic Components in Embeddings