A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
Efficient Inference for Big Models: a low-cost inference package for large pre-trained language models (PLMs).
On Transferability of Prompt Tuning for Natural Language Processing
The code for the ACL 2023 paper "Linear Classifier: An Often-Forgotten Baseline for Text Classification".
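As a rough illustration of what such a linear baseline looks like, here is a minimal sketch using scikit-learn's TF-IDF features with a linear SVM. It is a generic example, not the paper's exact pipeline; the toy texts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy data for illustration only.
train_texts = ["great movie", "terrible plot", "loved the acting", "boring and slow"]
train_labels = [1, 0, 1, 0]

# Word/bigram TF-IDF features fed into a linear SVM.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(train_texts, train_labels)
print(baseline.predict(["what a great film"]))
```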
Code for the paper "Exploiting Pretrained Biochemical Language Models for Targeted Drug Design", to appear in Bioinformatics, Proceedings of ECCB2022.
FusionDTI utilises a Token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction Prediction.
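For readers unfamiliar with token-level fusion, the sketch below shows one generic way to fuse drug and protein token embeddings with cross-attention in PyTorch. The dimensions, pooling, and prediction head are illustrative assumptions, not FusionDTI's actual module.

```python
import torch
import torch.nn as nn

class TokenFusion(nn.Module):
    """Toy token-level fusion: drug tokens attend over protein tokens."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, drug_tokens, protein_tokens):
        # Queries come from the drug tokens, keys/values from the protein tokens.
        fused, _ = self.attn(drug_tokens, protein_tokens, protein_tokens)
        # Mean-pool the fused drug sequence and predict an interaction score.
        return self.classifier(fused.mean(dim=1))

model = TokenFusion()
drug = torch.randn(2, 50, 256)      # batch of 2 drugs, 50 tokens each
protein = torch.randn(2, 300, 256)  # batch of 2 proteins, 300 tokens each
print(model(drug, protein).shape)   # torch.Size([2, 1])
```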
A Keras-based, TensorFlow-backed NLP models toolkit.
The official repository for the AAAI 2024 oral paper "Structured Probabilistic Coding".
This research examines the performance of Large Language Models (GPT-3.5 Turbo and Gemini 1.5 Pro) in Bengali Natural Language Inference, comparing them with state-of-the-art models using the XNLI dataset.
Identified adverse drug events (ADEs) and associated terms in an annotated corpus using Named Entity Recognition (NER) models built with Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa and SpanBERT.
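A minimal Flair NER sketch along these lines is shown below; it assumes the stock pre-trained English "ner" model rather than the project's fine-tuned ADE models, and the example sentence is made up.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a stock pre-trained English NER tagger (not the project's ADE model).
tagger = SequenceTagger.load("ner")

sentence = Sentence("The patient developed severe nausea after taking ibuprofen.")
tagger.predict(sentence)

# Print each predicted entity span with its label and confidence.
for entity in sentence.get_spans("ner"):
    print(entity)
```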
A Python tool for evaluating the quality of few-shot prompt learning.
LSTM models for text classification on character embeddings.
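A minimal character-level LSTM classifier sketch in Keras follows; the vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the repository's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 128   # e.g. ASCII code points as the character vocabulary
max_len = 200      # characters per document

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 32),       # learned per-character embeddings
    layers.LSTM(64),                        # sequence encoder over characters
    layers.Dense(1, activation="sigmoid"),  # binary text-classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```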
Fine-tuned BERT, mBERT, and XLM-RoBERTa for abusive comment detection in Telugu, code-mixed Telugu, and Telugu-English.
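A hedged sketch of loading XLM-RoBERTa for binary comment classification with Hugging Face Transformers is given below; the checkpoint, label count, and example text are placeholders, and the classification head is untrained until fine-tuned on the Telugu data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Base multilingual checkpoint with a fresh 2-way classification head.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

inputs = tokenizer(
    "example code-mixed Telugu comment", return_tensors="pt",
    truncation=True, padding=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
# Scores are meaningless until the head is fine-tuned on labeled comments.
print(logits.softmax(dim=-1))
```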
The code for the paper "An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering".