Sehee Lim
NLP
  • Sep 19, 2024

    Learning Transferable Visual Models From Natural Language Supervision

  • May 9, 2024

    Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation

  • May 6, 2024

    Training Language Models to Follow Instructions with Human Feedback (InstructGPT)

  • Mar 7, 2024

    Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup

  • Feb 27, 2024

    Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

  • Feb 15, 2024

    Improving Factuality and Reasoning in Language Models through Multiagent Debate

  • Feb 5, 2024

    Self-Rewarding Language Models

  • Feb 4, 2024

    REALM: Retrieval-Augmented Language Model Pre-Training

  • Feb 3, 2024

    Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena

  • Jan 30, 2024

    Prefix-Tuning: Optimizing Continuous Prompts for Generation

  • Jan 6, 2024

    Improving Language Understanding by Generative Pre-Training

  • Jan 4, 2024

    Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate

  • Dec 27, 2023

    Gamma Poisson Shrinker

  • Dec 4, 2023

    Simplifying Transformer Blocks

  • Nov 21, 2023

    Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

  • Nov 21, 2023

    Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves

  • Nov 21, 2023

    Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

  • Nov 11, 2023

    Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks

  • Nov 9, 2023

    Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

  • Nov 5, 2023

    Large Language Models are Zero-Shot Reasoners

  • Nov 4, 2023

    Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

  • Oct 30, 2023

    ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations

  • Oct 28, 2023

    DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

  • Oct 24, 2023

    DeBERTa: Decoding-Enhanced BERT with Disentangled Attention

  • Oct 23, 2023

    RoBERTa: A Robustly Optimized BERT Pretraining Approach

  • Oct 22, 2023

    BERT: Bidirectional Encoder Representations from Transformers

  • Sep 27, 2023

    Naive Bayes
