cot-finetuning
- ReFT: Reasoning with Reinforced Fine-Tuning (arXiv 2401.08967, published Jan 17, 2024)

faster-decoding
- Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads (arXiv 2401.10774, published Jan 19, 2024)

interesting-papers
- Self-Rewarding Language Models (arXiv 2401.10020, published Jan 18, 2024)
- Self-Discover: Large Language Models Self-Compose Reasoning Structures (arXiv 2402.03620, published Feb 6, 2024)

interpretability
- Rethinking Interpretability in the Era of Large Language Models (arXiv 2402.01761, published Jan 30, 2024)