Daily Paper Reading Log

September

| Date | Title | Notes |
| --- | --- | --- |
| 09/15 | A Short Review: Deep Retrieval-Based Dialogue Systems | |
| 09/16 | Improved Deep Learning Baselines for Ubuntu Corpus Dialogs | UDC |
| 09/16 | Sequential Attention-based Network for Noetic End-to-End Response Selection | ESIM |
| 09/23 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | SBERT |
| 09/24 | Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks | |
| 09/25 | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | InferSent |
| 09/26 | Learning Semantic Textual Similarity from Conversations | USE |
| 09/27 | A Simple but Tough-to-Beat Baseline for Sentence Embeddings | (unfinished) |
| 09/27 | An Effective Domain Adaptive Post-Training Method for BERT in Response Selection | BERT-VFT |
| 09/28 | Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots | SMN |
| 09/29 | Applying Deep Learning to Answer Selection: A Study and an Open Task | Siam-CNN |

October

| Date | Title | Notes |
| --- | --- | --- |
| 10/06 | Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues | BERT-SL |
| 10/07 | Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing | |
| 10/08 | Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference | PET |
| 10/09 | GPT Understands, Too | P-Tuning |
| 10/09 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | SimCSE |
| 10/10 | Structural Pre-training for Dialogue Comprehension | SPIDER |
| 10/11 | What Makes for Good Views for Contrastive Learning? | |
| 10/21 | ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer | ConSERT |
| 10/23 | Fine-grained Post-training for Improving Retrieval-based Dialogue Systems | BERT-FP |
| 10/26 | Semantic Re-Tuning with Contrastive Tension | CT |
| 10/27 | Pre-training Tasks for Embedding-based Large-scale Retrieval | |

November

| Date | Title | Notes |
| --- | --- | --- |
| 11/03 | Building an Efficient and Effective Retrieval-based Dialogue System | BE/CE |

December

Just realized I have been slacking off for over a month. From now on the focus will be on code implementation: after reading a paper, make sure to go through its code.

| Date | Title | Notes |
| --- | --- | --- |
| 12/06 | Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks | P-Tuning v2 |