Adding `safetensors` variant of this model
#20 opened over 1 year ago by SFconvertbot

Compatibility with Llama-2-7b LoRAs
#18 opened over 1 year ago by Balint831d

Adding Evaluation Results
#15 opened almost 2 years ago by leaderboard-pr-bot

Traceback (most recent call last)
#14 opened about 2 years ago by fwrefewrfwe

llama2 forward pass seemingly not working with padded inputs, unless one element in batch is not padded
👍 2
#13 opened about 2 years ago by joehakim

Input validation error: `max_new_tokens` must be <= 1. Given: 20
#12 opened about 2 years ago by reubenlee3

Loading model without fast-attn
#10 opened about 2 years ago by TZ20

Great model. Plans for 13b version?
👍 1
#9 opened about 2 years ago by nahuel89p

Model gives itself instructions and keeps going and going and going?
#8 opened about 2 years ago by michael-newsrx-com

Quantizations for llama.cpp
❤️ 1
#7 opened about 2 years ago by rozek

Any plans for chat model?
#5 opened about 2 years ago by brekk

when will have a ggml version?
#3 opened about 2 years ago by CUIGuy

LocalAI Model Loading
#2 opened about 2 years ago by FIWisher

The model doesn't seem to stop
#1 opened about 2 years ago by LaferriereJC