segmond's Collections
Chain-of-Thought Reasoning Without Prompting • arXiv:2402.10200 • 99 upvotes
How to Train Data-Efficient LLMs • arXiv:2402.09668 • 38 upvotes
BitDelta: Your Fine-Tune May Only Be Worth One Bit • arXiv:2402.10193 • 17 upvotes
A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts • arXiv:2402.09727 • 35 upvotes
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models • arXiv:2401.01335 • 64 upvotes
OS-Copilot: Towards Generalist Computer Agents with Self-Improvement • arXiv:2402.07456 • 41 upvotes
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs • arXiv:2402.04291 • 48 upvotes
Self-Discover: Large Language Models Self-Compose Reasoning Structures • arXiv:2402.03620 • 109 upvotes
Shortened LLaMA: A Simple Depth Pruning for Large Language Models • arXiv:2402.02834 • 14 upvotes
TrustLLM: Trustworthiness in Large Language Models • arXiv:2401.05561 • 65 upvotes
SliceGPT: Compress Large Language Models by Deleting Rows and Columns • arXiv:2401.15024 • 69 upvotes
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence • arXiv:2401.14196 • 47 upvotes
Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text • arXiv:2401.12070 • 43 upvotes
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection • arXiv:2310.11511 • 74 upvotes
Chain-of-Verification Reduces Hallucination in Large Language Models • arXiv:2309.11495 • 38 upvotes
Adapting Large Language Models via Reading Comprehension • arXiv:2309.09530 • 77 upvotes
Language Modeling Is Compression • arXiv:2309.10668 • 82 upvotes
arXiv:2309.16609 • 34 upvotes
CodeFusion: A Pre-trained Diffusion Model for Code Generation • arXiv:2310.17680 • 69 upvotes
Extending LLMs' Context Window with 100 Samples • arXiv:2401.07004 • 15 upvotes
The Impact of Reasoning Step Length on Large Language Models • arXiv:2401.04925 • 16 upvotes
arXiv:2401.04088 • 159 upvotes
Alpha-CLIP: A CLIP Model Focusing on Wherever You Want • arXiv:2312.03818 • 32 upvotes
Chain of Code: Reasoning with a Language Model-Augmented Code Emulator • arXiv:2312.04474 • 29 upvotes
ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs • arXiv:2311.13600 • 42 upvotes
The Generative AI Paradox: "What It Can Create, It May Not Understand" • arXiv:2311.00059 • 18 upvotes
CodePlan: Repository-level Coding using LLMs and Planning • arXiv:2309.12499 • 73 upvotes
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery • arXiv:2310.18356 • 22 upvotes
Agents: An Open-source Framework for Autonomous Language Agents • arXiv:2309.07870 • 42 upvotes
Direct Language Model Alignment from Online AI Feedback • arXiv:2402.04792 • 29 upvotes
Rethinking Interpretability in the Era of Large Language Models • arXiv:2402.01761 • 21 upvotes
PDFTriage: Question Answering over Long, Structured Documents • arXiv:2309.08872 • 53 upvotes
OLMo: Accelerating the Science of Language Models • arXiv:2402.00838 • 80 upvotes
Self-Rewarding Language Models • arXiv:2401.10020 • 144 upvotes
ReFT: Reasoning with Reinforced Fine-Tuning • arXiv:2401.08967 • 28 upvotes
Understanding LLMs: A Comprehensive Overview from Training to Inference • arXiv:2401.02038 • 62 upvotes
LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning • arXiv:2401.01325 • 26 upvotes
A Comprehensive Study of Knowledge Editing for Large Language Models • arXiv:2401.01286 • 16 upvotes
Time is Encoded in the Weights of Finetuned Language Models • arXiv:2312.13401 • 19 upvotes
TinyGSM: achieving >80% on GSM8k with small language models • arXiv:2312.09241 • 37 upvotes
Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations • arXiv:2312.06674 • 6 upvotes
Magicoder: Source Code Is All You Need • arXiv:2312.02120 • 79 upvotes
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model • arXiv:2311.13231 • 26 upvotes
Exponentially Faster Language Modelling • arXiv:2311.10770 • 118 upvotes
Orca 2: Teaching Small Language Models How to Reason • arXiv:2311.11045 • 70 upvotes
Fast Chain-of-Thought: A Glance of Future from Parallel Decoding Leads to Answers Faster • arXiv:2311.08263 • 15 upvotes
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset • arXiv:2402.10176 • 35 upvotes
Generative Representational Instruction Tuning • arXiv:2402.09906 • 51 upvotes
Prometheus: Inducing Fine-grained Evaluation Capability in Language Models • arXiv:2310.08491 • 53 upvotes
Tuna: Instruction Tuning using Feedback from Large Language Models • arXiv:2310.13385 • 10 upvotes
AgentTuning: Enabling Generalized Agent Abilities for LLMs • arXiv:2310.12823 • 35 upvotes
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B • arXiv:2310.20624 • 12 upvotes
Learning From Mistakes Makes LLM Better Reasoner • arXiv:2310.20689 • 28 upvotes
S-LoRA: Serving Thousands of Concurrent LoRA Adapters • arXiv:2311.03285 • 28 upvotes
Levels of AGI: Operationalizing Progress on the Path to AGI • arXiv:2311.02462 • 33 upvotes
Can LLMs Follow Simple Rules? • arXiv:2311.04235 • 10 upvotes
LLaMA Pro: Progressive LLaMA with Block Expansion • arXiv:2401.02415 • 53 upvotes
A Zero-Shot Language Agent for Computer Control with Structured Reflection • arXiv:2310.08740 • 14 upvotes
Premise Order Matters in Reasoning with Large Language Models • arXiv:2402.08939 • 25 upvotes
More Agents Is All You Need • arXiv:2402.05120 • 51 upvotes
Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache • arXiv:2401.02669 • 14 upvotes
Supervised Knowledge Makes Large Language Models Better In-context Learners • arXiv:2312.15918 • 8 upvotes
Instruction-tuning Aligns LLMs to the Human Brain • arXiv:2312.00575 • 11 upvotes
Prompt Engineering a Prompt Engineer • arXiv:2311.05661 • 20 upvotes
MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning • arXiv:2311.02303 • 4 upvotes
Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs • arXiv:2311.02262 • 10 upvotes
Personas as a Way to Model Truthfulness in Language Models • arXiv:2310.18168 • 5 upvotes
Improving Text Embeddings with Large Language Models • arXiv:2401.00368 • 79 upvotes
Customizing Language Model Responses with Contrastive In-Context Learning • arXiv:2401.17390
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model • arXiv:2402.07827 • 45 upvotes
Efficient Tool Use with Chain-of-Abstraction Reasoning • arXiv:2401.17464 • 16 upvotes
Specialized Language Models with Cheap Inference from Limited Domain Data • arXiv:2402.01093 • 45 upvotes
Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks • arXiv:2402.04248 • 30 upvotes
Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains • arXiv:2402.05140 • 20 upvotes
TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise • arXiv:2310.19019 • 9 upvotes
Language Models can be Logical Solvers • arXiv:2311.06158 • 18 upvotes
GPQA: A Graduate-Level Google-Proof Q&A Benchmark • arXiv:2311.12022 • 25 upvotes
Memory Augmented Language Models through Mixture of Word Experts • arXiv:2311.10768 • 16 upvotes
Digital Socrates: Evaluating LLMs through explanation critiques • arXiv:2311.09613 • 1 upvote
On the Prospects of Incorporating Large Language Models (LLMs) in Automated Planning and Scheduling (APS) • arXiv:2401.02500 • 1 upvote
In-Context Principle Learning from Mistakes • arXiv:2402.05403 • 14 upvotes
Can Large Language Models Understand Context? • arXiv:2402.00858 • 21 upvotes
Data Engineering for Scaling Language Models to 128K Context • arXiv:2402.10171 • 21 upvotes
A Closer Look at the Limitations of Instruction Tuning • arXiv:2402.05119 • 5 upvotes
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay • arXiv:2402.04858 • 14 upvotes
Code Representation Learning At Scale • arXiv:2402.01935 • 12 upvotes