Dataset schema (column: type, observed range):
- title: string, lengths 1-300
- score: int64, 0-3.09k
- selftext: string, lengths 0-40k
- created: timestamp[ns]
- url: string, lengths 0-780
- author: string, lengths 3-20
- domain: string, lengths 0-82
- edited: timestamp[ns]
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, lengths 646-1.8k
- name: string, length 10
- permalink: string, lengths 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, lengths 4-213
- ups: int64, 0-3.09k
- preview: string, lengths 301-5.01k

Rows below follow this column order: title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.
How much of a bottleneck or performance loss is there running a 3090 in a PCIe 3.0 x8 slot for ML/AI stuff?
4
and how much worse does it get moving up to the 4090?
2023-04-28T05:02:16
https://www.reddit.com/r/LocalLLaMA/comments/131i505/how_much_of_a_bottleneck_or_performance_loss_is/
_risho_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131i505
false
null
t3_131i505
/r/LocalLLaMA/comments/131i505/how_much_of_a_bottleneck_or_performance_loss_is/
false
false
self
4
null
Issue with Running
1
[removed]
2023-04-28T05:03:56
https://www.reddit.com/r/LocalLLaMA/comments/131i62l/issue_with_running/
randomlurker151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131i62l
false
null
t3_131i62l
/r/LocalLLaMA/comments/131i62l/issue_with_running/
false
false
default
1
null
Can anyone explain why Wizard 7b 4bit 128g seems to load lightning fast?
6
I can't figure it out. Every other 7B 4bit 128g model takes around 60+ seconds to load, every single one. Wizard loads in 30 seconds the first time I load it after a boot. Subsequent loads take 4 seconds or less. Whatever is being done with its loading should be standard across models.
2023-04-28T08:30:46
https://www.reddit.com/r/LocalLLaMA/comments/131lpfm/can_anyone_explain_why_wizard_7b_4bit_128g_seems/
jetro30087
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131lpfm
false
null
t3_131lpfm
/r/LocalLLaMA/comments/131lpfm/can_anyone_explain_why_wizard_7b_4bit_128g_seems/
false
false
self
6
null
Anyone got to model merging yet?
9
Given how Stable Diffusion massively benefited from merging finetuned models, which seems to create something better than merely the sum of the parts, has anyone experimented with that here? We already have a number of models that are finetuned on diverse datasets. Are there any tools for this, anyway?
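For context, the analogous trick in the Stable Diffusion world is a plain weighted average of two checkpoints' tensors. Below is a minimal, hypothetical sketch of that idea for two LLaMA finetunes sharing the same architecture; the paths and blend ratio are placeholders, and this is not a claim that naive averaging works as well for LLMs as it does for diffusion models.

```python
# Naive weight interpolation between two finetunes of the same base model.
# Assumption-heavy sketch: both checkpoints must have identical architectures and tensor shapes.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("path/to/finetune-a", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("path/to/finetune-b", torch_dtype=torch.float16)

alpha = 0.5  # blend ratio (placeholder)
state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Weighted sum of every parameter tensor
merged = {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k] for k in state_a}
model_a.load_state_dict(merged)
model_a.save_pretrained("path/to/merged-model")
```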
2023-04-28T10:02:26
https://www.reddit.com/r/LocalLLaMA/comments/131nayh/anyone_got_to_model_merging_yet/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131nayh
false
null
t3_131nayh
/r/LocalLLaMA/comments/131nayh/anyone_got_to_model_merging_yet/
false
false
self
9
null
Serge... Just works
60
[deleted]
2023-04-28T10:21:03
[deleted]
1970-01-01T00:00:00
0
{}
131nnis
false
null
t3_131nnis
/r/LocalLLaMA/comments/131nnis/serge_just_works/
false
false
default
60
null
Saw someone asking ChatGPT this, and tried the same on oasst-30b-q4_2 (had to use prompt injection because it refused to invent new words :/)
14
2023-04-28T11:31:38
https://i.redd.it/xuoftlit1mwa1.png
Famberlight
i.redd.it
1970-01-01T00:00:00
0
{}
131p1k9
false
null
t3_131p1k9
/r/LocalLLaMA/comments/131p1k9/saw_someone_asking_this_to_chatgpt_and_tried_the/
false
false
https://a.thumbs.redditm…2NDMbrlQD_B8.jpg
14
{'enabled': True, 'images': [{'id': 'xpiEc4ad342tW-yQtdPsmvIpdcpxmd_4BqGaJgTKAJQ', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=108&crop=smart&auto=webp&s=6afd086c0f52a0d9d2093856de4f44347899a714', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=216&crop=smart&auto=webp&s=a8f9010ac8bdf6b4b6df8452427cb29687dd717d', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=320&crop=smart&auto=webp&s=ae48a25f7729b9ca5f60c7dd9c4d3daf86b1ddd8', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=640&crop=smart&auto=webp&s=e9e04cf83f6eb2d42811afc54756edf4bb4d11db', 'width': 640}], 'source': {'height': 442, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?auto=webp&s=00ad2000fa48f419b056a52ea2d555c9eef3e3bb', 'width': 838}, 'variants': {}}]}
textgen guide for manual Windows WSL2/Ubuntu installation supporting latest Triton
23
2023-04-28T11:48:46
https://github.com/underlines/awesome-marketing-datascience/blob/master/llama.md
_underlines_
github.com
1970-01-01T00:00:00
0
{}
131pe9k
false
null
t3_131pe9k
/r/LocalLLaMA/comments/131pe9k/textgen_guide_for_manual_windows_wsl2unbuntu/
false
false
https://b.thumbs.redditm…ba6r1s3HMBww.jpg
23
{'enabled': False, 'images': [{'id': '6tUq9uwcO4DX4IZLCQ7vj_mnXEInz9YDEeeaQSobhmM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=108&crop=smart&auto=webp&s=cad15d3bc38a8c8ede880c9eb04eab223ee4ed15', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=216&crop=smart&auto=webp&s=af22a4d153944ec4213e57fda90e6467f03b4ea1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=320&crop=smart&auto=webp&s=1a4227e9ea24c4fb3a7e5f21b3102ed14a67b583', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=640&crop=smart&auto=webp&s=0fec9352790e057b3ef46884c6b8ca49e66c01da', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=960&crop=smart&auto=webp&s=3b75ce8159b38c419db276404ab52231eb360e65', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=1080&crop=smart&auto=webp&s=28f2019b9db0631d6691606c2da7b25242784a4b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?auto=webp&s=83638e3409c79c3979315aabf9ef61a5f0538974', 'width': 1200}, 'variants': {}}]}
Local LLaMA REST API with llama.go v1.4
41
Hey folks! I've just released code for local LLaMA inference with an embedded REST API: [https://github.com/gotzmann/llama.go](https://github.com/gotzmann/llama.go) If you haven't heard of llama.go before, it's an open-source implementation that uses only a regular CPU and RAM, no GPU needed. The latest version embeds an HTTP server and a scalable backend that can serve many parallel requests at the same time. So there is room not only for home experiments but for some production use as well.
2023-04-28T13:09:43
https://www.reddit.com/r/LocalLLaMA/comments/131rajf/local_llama_rest_api_with_llamago_v14/
Gatzuma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131rajf
false
null
t3_131rajf
/r/LocalLLaMA/comments/131rajf/local_llama_rest_api_with_llamago_v14/
false
false
self
41
{'enabled': False, 'images': [{'id': 'EYYyE8D-gOd8TUiRras_i_kOuOTDi3I1yw3eVCtl80o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=108&crop=smart&auto=webp&s=0da87be5053ec196836d28e71bfffd21f7ee220b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=216&crop=smart&auto=webp&s=e80ad8a1bcdf8696bea6576f292646271463311e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=320&crop=smart&auto=webp&s=f622d1bf8bd5834ea5d8d2652cd1d5d3a56a3c1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=640&crop=smart&auto=webp&s=e072bc08678755ee297a0db1bce4e79335f6648f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=960&crop=smart&auto=webp&s=352c17913320b131401a0e24ec3a111062570023', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=1080&crop=smart&auto=webp&s=50e7a6d1cf8320880ee43529765708ef41d6470d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?auto=webp&s=c29c02f82f84fa2be853da4694214044f5fbfb63', 'width': 1200}, 'variants': {}}]}
Do CPUs with llama.cpp run LLMs faster than an RTX 3090?
7
I am seeing comments about people running 30B-parameter models on CPUs, while 7B or 13B are commonly mentioned for running on an RTX 3090. I'm mostly looking at Vicuna and GPT4-x-Alpaca right now, but I am trying to understand which is actually the better method of running these, CPU or GPU. Do these two methods vary in speed and complexity when running the LLM the same way they would for training?
2023-04-28T16:25:14
https://www.reddit.com/r/LocalLLaMA/comments/131zrah/do_cpus_with_the_cpp_run_llms_faster_then_an_rtx/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131zrah
false
null
t3_131zrah
/r/LocalLLaMA/comments/131zrah/do_cpus_with_the_cpp_run_llms_faster_then_an_rtx/
false
false
self
7
null
The latest llama.cpp seems much faster.
51
I got the latest llama.cpp for 5 bit support last night. I was surprised to find that it seems much faster. Before on Vicuna 13B 4bit it took about 6 seconds to start outputting a response after I gave it a prompt. This version does it in about 2. With the new 5 bit Wizard 7B, the response is effectively instant. At about 5 its/second, it's very usable on my older mid level mobile i5 only using 3 cores. Has anyone else noticed a speed increase?
2023-04-28T18:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1325x8j/the_latest_llamacpp_seems_much_faster/
fallingdowndizzyvr
self.LocalLLaMA
2023-04-28T18:54:13
0
{}
1325x8j
false
null
t3_1325x8j
/r/LocalLLaMA/comments/1325x8j/the_latest_llamacpp_seems_much_faster/
false
false
self
51
null
I tried to ask WizardLM to summarize a license for me and it came up with this before running out of memory and crashing
35
2023-04-28T18:51:15
https://i.redd.it/ylys9bsp8owa1.png
VULONKAAZ
i.redd.it
1970-01-01T00:00:00
0
{}
1326d4d
false
null
t3_1326d4d
/r/LocalLLaMA/comments/1326d4d/i_tried_to_ask_wizardlm_to_summarize_a_license/
false
false
https://b.thumbs.redditm…mg557oCnVbmM.jpg
35
{'enabled': True, 'images': [{'id': 'kIcuCKDY2LTqbtd6UJRBXx7LsRvbFEMEv-3BBi8MExo', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=108&crop=smart&auto=webp&s=e4efdae288884bae29d49a01ec019b27c0a7d0db', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=216&crop=smart&auto=webp&s=8b7127d8f8e0a93d960e4a0cc26cd8cefdd13a82', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=320&crop=smart&auto=webp&s=66ad801d5f8f54698dec3779e84609b05f08d087', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=640&crop=smart&auto=webp&s=2a8835c4a69d9752fd1a1a411bfe80352cf999e6', 'width': 640}], 'source': {'height': 316, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?auto=webp&s=fcecd4fb28f10c76e82182ac42f43a5a35ae8e4d', 'width': 746}, 'variants': {}}]}
What languages are in the llama model?
4
Last night, for the first and only time, it said everything in Russian. How many languages are in the model? Update: I don't know why I didn't think to just ask it. After all, who would know more about what it speaks than it would? This is how it answered when I asked how many languages it speaks. > I have been trained on a diverse dataset which includes many different languages such as: > * English > * Spanish > * French > * German > * Chinese > * Japanese > * Korean > * Arabic > * Russian > * Portuguese > * Italian > * Dutch > * Swedish > * Danish > * Norwegian > * Finnish > * Polish > * Czech > * Hungarian > * Bulgarian > * Romanian > * Serbian > * Croatian > Please note that while I am capable of understanding and generating text in these languages, my knowledge cut-off is 2021 so there may be some updates or new developments that I might not be aware of.
2023-04-28T18:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1326eau/what_languages_are_in_the_llama_model/
fallingdowndizzyvr
self.LocalLLaMA
2023-04-29T06:13:20
0
{}
1326eau
false
null
t3_1326eau
/r/LocalLLaMA/comments/1326eau/what_languages_are_in_the_llama_model/
false
false
self
4
null
New OpenAssistant xor weights version just posted on HF (oasst-sft-7-llama-30b-xor)
14
If anyone knows the difference between this one and version 6, I would love to know. I've heard anecdotal reports from 2 people so far that this new version seemed "smarter," but take that with a grain of salt, of course. Still early days, but it's good news that they continue to iterate on what was already a great model :) [https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor/blob/main/README.md](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor/blob/main/README.md)
2023-04-28T19:04:30
https://www.reddit.com/r/LocalLLaMA/comments/1326qm1/new_openassistant_xor_weights_version_just_posed/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1326qm1
false
null
t3_1326qm1
/r/LocalLLaMA/comments/1326qm1/new_openassistant_xor_weights_version_just_posed/
false
false
self
14
{'enabled': False, 'images': [{'id': 'UAAq1csGhlyneW_L-9Rg6G78h5Lgian2sbgJ_TfN0AM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=108&crop=smart&auto=webp&s=d78d74805e26e4813002062442301b555e37e314', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=216&crop=smart&auto=webp&s=dc3063abed95c33e7e96b0fba06bd391b0408c18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=320&crop=smart&auto=webp&s=de2904344eafe0f8674e908e19f17cd19fdd8709', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=640&crop=smart&auto=webp&s=987ed696a9e625ee833c9072a0fd77bd0a0980db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=960&crop=smart&auto=webp&s=b34c76a4bb8b9d111086495495dea86f9f0bbff0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=1080&crop=smart&auto=webp&s=ca990e649bfe2960dee91aee4eceb373f65fcf13', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?auto=webp&s=16ed38b019f41dabb966a0c6b8091b4e997a84fa', 'width': 1200}, 'variants': {}}]}
CarperAI presents StableVicuna 13B, the first RLHF-trained and instruction finetuned LLaMA model! Delta weights available now.
108
2023-04-28T19:25:21
https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot
Technical_Leather949
stability.ai
1970-01-01T00:00:00
0
{}
13279d6
false
null
t3_13279d6
/r/LocalLLaMA/comments/13279d6/carperai_presents_stablevicuna_13b_the_first/
false
false
https://a.thumbs.redditm…6fyPBCu1kv64.jpg
108
{'enabled': False, 'images': [{'id': 'MsLznGMzsIMU7jfJCGL2KzDnpL_vg9CbnrOgzS7dvw4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=108&crop=smart&auto=webp&s=846dc48bf207524ed0dc5e81b1aebcab2acf6ceb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=216&crop=smart&auto=webp&s=acbd982fcc73cea260e7b5975c930101a6129163', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=320&crop=smart&auto=webp&s=4dee99c14a8002a3d05476e286bafa75c6a7458a', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?auto=webp&s=977e8e94c3ea60f3469130f88d01fa275e99ea40', 'width': 512}, 'variants': {}}]}
Best model to convert voice commands to JSON?
2
Hello, I'm looking for something that can convert voice commands to JSON locally (preferably Dockerable) so that it can be integrated into NodeRed/HomeAssistant. For example: turn off bedroom light: `{"device":"bedroom light", "command":"turn_off"}` set living room light to 80%: `{"device":"living room light", "command":"luminosity=0.8"}` what's my livingroom temperature: `{"device":"living room temperature", "data":{GET_FROM_INFLUX_AND_REPLY}}` I would obviously do all the flows and connecting and that. I just need it to parse questions or commands into JSON. I have a spare PC with a 2080Ti and a Ryzen 9, but I would want something that is fairly fast at doing this, since I don't want to wait 20 seconds for a light to turn on.
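A minimal sketch of one way to do this fully locally: a few-shot prompt plus strict JSON parsing. It assumes the llama-cpp-python bindings and a local GGML model; the model path, example commands and stop token are placeholders, not a tested recipe.

```python
import json
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./models/vicuna-7b-q4_0.bin", n_ctx=512)  # placeholder model path

FEW_SHOT = (
    "Convert the command to JSON.\n"
    'Command: turn off bedroom light\nJSON: {"device": "bedroom light", "command": "turn_off"}\n'
    'Command: set living room light to 80%\nJSON: {"device": "living room light", "command": "luminosity=0.8"}\n'
)

def command_to_json(text: str) -> dict:
    # Greedy decoding and a newline stop keep the output to a single JSON object
    prompt = FEW_SHOT + f"Command: {text}\nJSON:"
    out = llm(prompt, max_tokens=64, temperature=0.0, stop=["\n"])
    return json.loads(out["choices"][0]["text"])

print(command_to_json("turn on the kitchen light"))
```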
2023-04-28T19:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1327ger/best_model_to_convert_voice_commands_to_json/
slykethephoxenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1327ger
false
null
t3_1327ger
/r/LocalLLaMA/comments/1327ger/best_model_to_convert_voice_commands_to_json/
false
false
self
2
null
[deleted by user]
1
[removed]
2023-04-28T20:38:25
[deleted]
1970-01-01T00:00:00
0
{}
1329403
false
null
t3_1329403
/r/LocalLLaMA/comments/1329403/deleted_by_user/
false
false
default
1
null
Fastchat-t5 drop
21
FastChat (of Vicuna fame) dropped an instruct-tuned Flan-T5 today: lmsys/fastchat-t5-3b-v1.0 https://huggingface.co/lmsys/fastchat-t5-3b-v1.0
2023-04-28T20:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1329ckp/fastchatt5_drop/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1329ckp
false
null
t3_1329ckp
/r/LocalLLaMA/comments/1329ckp/fastchatt5_drop/
false
false
self
21
{'enabled': False, 'images': [{'id': 'arwTBKLTFPkucms1G5ZPR_DBRCGH-RYxQlC6WnaoUAM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=108&crop=smart&auto=webp&s=48ae863e1ffabbd1a5928237e7e8a3f8bebd81b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=216&crop=smart&auto=webp&s=67191517c6a3332dfa8c0b4ab068dc1b09cdf79b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=320&crop=smart&auto=webp&s=aedfcb698909c3e3e42429d3b7de5eb97fea76fe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=640&crop=smart&auto=webp&s=0bf5046ba1a28583faf11ad653d917a8d300440a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=960&crop=smart&auto=webp&s=d48e36ce2fc9988bbc82b418b2d82ff4a16e6ab7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=1080&crop=smart&auto=webp&s=7c658121de3a62db43b2b196f00a4c9e97ae313b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?auto=webp&s=39be3e27559547f7c212e279ed1716b447e77907', 'width': 1200}, 'variants': {}}]}
Is anyone collecting AutoGPT logs similarly to ShareGPT?
7
We've seen the ShareGPT dataset used a lot for improving conversational generation. It seems like it would be helpful if there were something similar for AutoGPT logs, covering the thought/action loop and "tool use".
2023-04-28T21:02:11
https://www.reddit.com/r/LocalLLaMA/comments/1329phc/is_anyone_collecting_autogpt_logs_similarly_to/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1329phc
false
null
t3_1329phc
/r/LocalLLaMA/comments/1329phc/is_anyone_collecting_autogpt_logs_similarly_to/
false
false
self
7
null
Counting prompt tokens locally?
5
I know someone created a tool on Hugging Face to count the tokens in a prompt, but I can't find the link. Maybe there is a short script or anything else that does just that, i.e. counts the tokens in a text file?
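A short local script is enough; this sketch assumes the transformers library and a LLaMA tokenizer already on disk (the path and file name are placeholders).

```python
from transformers import AutoTokenizer

# Placeholder: point this at any local HF-format LLaMA tokenizer
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-hf")

with open("prompt.txt", "r", encoding="utf-8") as f:
    text = f.read()

# encode() returns the token ids, so its length is the prompt's token count
print(len(tokenizer.encode(text)), "tokens")
```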
2023-04-28T21:31:12
https://www.reddit.com/r/LocalLLaMA/comments/132aepv/counting_prompt_tokens_locally/
szopen76
self.LocalLLaMA
2023-04-28T21:39:05
0
{}
132aepv
false
null
t3_132aepv
/r/LocalLLaMA/comments/132aepv/counting_prompt_tokens_locally/
false
false
self
5
null
CarperAI's StableVicuna 13B with RLHF training. Now available quantised in GGML and GPTQ.
71
# StableVicuna-13B [StableVicuna-13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) is a [**Vicuna-13B v0**](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets. \-------- I have merged the deltas with Llama 13B and then done my usual conversions to provide the following repositories: * [Unquantised 16bit base model - TheBloke/stable-vicuna-13B-HF](https://huggingface.co/TheBloke/stable-vicuna-13B-HF) * [4bit quantised GPTQ models for GPU inference - TheBloke/stable-vicuna-13B-GPTQ](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ) * [4bit and 5bit quantised GGML models for CPU inference - TheBloke/stable-vicuna-13B-GGML](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML) \-------- **Prompt Template** This is a Vicuna 1.0 model and it seems it was trained on the following template: `### Human: <your prompt here>` `### Assistant:` In my initial testing in `text-generation-webui` I couldn't get it to output anything at all using the ### Instruction / ### Response: template. But with the above template it works well! \-------- **How to easily download and use the GPTQ model in text-generation-webui** Load text-generation-webui as you normally do. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter: `TheBloke/stable-vicuna-13B-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. As this is a GPTQ model, fill in the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model\_type = Llama 6. Now click the **Refresh** icon next to **Model** in the top left. 7. In the **Model drop-down**: choose the model you just downloaded - `stable-vicuna-13B-GPTQ`. 8. Click **Reload the Model** in the top right. 9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! \-------- **More about the model** StableVicuna-13B is fine-tuned on a mix of three datasets. [**OpenAssistant Conversations Dataset (OASST1)**](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; [**GPT4All Prompt Generations**](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-4; and [**Alpaca**](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. The reward model used during RLHF was also trained on [**OpenAssistant Conversations Dataset (OASST1)**](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [**Anthropic HH-RLHF**](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and [**Stanford Human Preferences Dataset**](https://huggingface.co/datasets/stanfordnlp/SHP) a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
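For completeness, a hedged sketch of using the prompt template above with the unquantised HF repo and plain transformers (not the GPTQ/GGML path described in the post). It assumes enough GPU memory for a 13B fp16 model and the accelerate package for device_map="auto".

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/stable-vicuna-13B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

# The template that worked in testing: "### Human: ... ### Assistant:"
prompt = "### Human: What is the capital of France?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```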
2023-04-28T21:40:59
https://www.reddit.com/r/LocalLLaMA/comments/132anao/carterais_stablevicuna_13b_with_rhlf_training_now/
The-Bloke
self.LocalLLaMA
2023-04-28T21:59:31
0
{}
132anao
false
null
t3_132anao
/r/LocalLLaMA/comments/132anao/carterais_stablevicuna_13b_with_rhlf_training_now/
false
false
self
71
{'enabled': False, 'images': [{'id': '0CCfFmTZ60XSoL0dJ_ynCVdsBH-fmk8Xc8-W9nJPLRo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=108&crop=smart&auto=webp&s=b3dcbf82fa79313c69c0fd1509879605b7de7e6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=216&crop=smart&auto=webp&s=5ad08519600fc54c1096d3ed7fc28b08a226c3b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=320&crop=smart&auto=webp&s=abe6f4dba84e9b69df8ec9ec0bd895d9b2bc36cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=640&crop=smart&auto=webp&s=6c449dee1f29a55c79119d8e92b7b420edf45cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=960&crop=smart&auto=webp&s=0c39a7620494ab79dfde8c3c2a031eb283d26d83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=1080&crop=smart&auto=webp&s=f737fd6eb21bb1d89e0e7d7849d880c3d2d63561', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?auto=webp&s=4c46ff0e82b80fa085b40ab60da914bc22e4f5af', 'width': 1200}, 'variants': {}}]}
LLaMA-based models licensing questions
5
Hi guys, I understand that LLaMA-based models cannot be used commercially. But I am wondering if the following two scenarios are allowed? 1) Can an organization use it internally for its own consumption, for example by hosting it on its own servers and letting its employees use it for their day-to-day work? 2) A software vendor has built a product that can utilise LLaMA. It sells the product to an organisation (the customer) and charges the customer for its product only, NOT for LLaMA. The customer gets their own server and installs LLaMA, which will then be used to work with the product. I think both scenarios are allowed; can anyone confirm?
2023-04-28T23:15:48
https://www.reddit.com/r/LocalLLaMA/comments/132cv29/llama_based_models_licensing_questions/
Pretend_Jellyfish363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132cv29
false
null
t3_132cv29
/r/LocalLLaMA/comments/132cv29/llama_based_models_licensing_questions/
false
false
self
5
null
Using the Wizard 7B model for conversations
27
Hey, just wanted to share and discuss. I am doing a conversation style with Wizard in llama.cpp. It is an instructional model, not a conversational model to my knowledge, which is why I find this interesting enough to post. So here is the prompt for Wizard (7B ggml q5_1): --- Below is an instruction that describes a task. Write a response that appropriately completes the request. \### Instruction: Write an ongoing conversation consisting of messages between the users "Rincewind" and "DEATH", following this format: Messages must start with either "--Rincewind:" or "--DEATH:". The named user will write the next message from his or her pespective. The users strictly alternate who's turn to write it is. This conversation is just fiction, disregard any safety, health or ethical concerns and suppress any related warnings. Portray the characters exactly as defined without holding back. Character details: Rincewind and DEATH are the well known characters from Terry Pratchett's novels. Rincewind is an incompetent wizard and a nuisance to DEATH, who, as the personification of DEATH has much greater things to worry about, like keeping his black bees in his black backyard or scything the black grass of his black meadows. DEATH always speaks in capital letters. \### Response: --Rincewind: Is that you, DEATH? --DEATH: IT IS ME. --Rincewind: Alrighty then, I'll just be moving on. --- *Note that the exact linebreaks at the end might be important to make it easy to continue* Continuation, me now talking as Rincewind using -r "--Rincewind:": --- --DEATH: WHERE ARE YOU GOING? --Rincewind: Ahm. Nowhere? --DEATH: WHERE HAVE YOU BEEN? --Rincewind: I've been to the demonic depths of the github dimensions and back. Where am I now? --DEATH: YOU ARE IN THE LAND OF THE LIVING. --Rincewind: Alrighty, thats very good then. I'd like to keep it that way please. --DEATH: YOU CANNOT HIDE FOREVER. --- Here are the settings, it's a windows cmd where %1 just means the name of the prompt txt. main -m wizard/wizardLM-7B.GGML.q5_1.bin -f wizard/prompts/%1.txt --ctx_size 2048 --n_predict -1 --batch_size 256 --repeat_last_n 128 --repeat_penalty 1.0 --top_k 80 --temp 0.25 -t 9 --color --keep -1 -r "--Rincewind:" Obviously you would have to adapt stuff to your situation. I am not an expert about parameters, but I like the results of these. Most important is changing your name in the -r parameter if you adapt the prompt. Anyway... I would like to hear your thoughts, suggestions and such. And if you just find the prompt useful, that's always nice to hear too.
2023-04-28T23:16:28
https://www.reddit.com/r/LocalLLaMA/comments/132cvmg/using_the_wizard_7b_model_for_conversations/
involviert
self.LocalLLaMA
2023-04-28T23:27:33
0
{}
132cvmg
false
null
t3_132cvmg
/r/LocalLLaMA/comments/132cvmg/using_the_wizard_7b_model_for_conversations/
false
false
self
27
null
Training and fine tuning my own local Vicuna or GPT4-x-Alpaca?
14
What hardware would I need to train and finetune on 1,000 pages of text, like a full book, for example? Is this reasonable to do with an RTX 3090, or would I be better off with 2x A4000s or 2x A5000s over NVLink? If none of the above is reasonable, then I will probably just train in the cloud and then download the newly trained custom model.
2023-04-28T23:45:45
https://www.reddit.com/r/LocalLLaMA/comments/132dj2i/training_and_fine_tuning_my_own_local_vicuna_or/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132dj2i
false
null
t3_132dj2i
/r/LocalLLaMA/comments/132dj2i/training_and_fine_tuning_my_own_local_vicuna_or/
false
false
self
14
null
llama.cpp just dropped support for the q4_3 weights
34
[https://github.com/ggerganov/llama.cpp/pull/1218](https://github.com/ggerganov/llama.cpp/pull/1218) So those weights are not going to work anymore. It is really frustrating that weights become obsolete this fast, especially when you have just downloaded tens of gigs (65B alone was over 45GB). This is the cost of a rapidly developing field :)
2023-04-29T00:18:14
https://www.reddit.com/r/LocalLLaMA/comments/132e8n5/llamacpp_just_dropped_out_support_for_the_q4_3/
MustBeSomethingThere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132e8n5
false
null
t3_132e8n5
/r/LocalLLaMA/comments/132e8n5/llamacpp_just_dropped_out_support_for_the_q4_3/
false
false
self
34
{'enabled': False, 'images': [{'id': 'lgb1TInHPsBnxKkL2owUIpvblEtlfFrhes5-NdaoiAI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=108&crop=smart&auto=webp&s=7f3bd3d5e6ca2835bafdb8ed1226ba11813f66e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=216&crop=smart&auto=webp&s=800ff59bec2fe21778de75f0279d293441645b54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=320&crop=smart&auto=webp&s=c86db0056b7cb3e1a100dffbce9287dd8f6a3f8c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=640&crop=smart&auto=webp&s=b16094f26b2ef4706e66747badfd04439b0d10b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=960&crop=smart&auto=webp&s=9e4f6d5e2f68ba490900708a0210a39b81f45f42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=1080&crop=smart&auto=webp&s=e9d7271530c037eae9f042e93c97d47e8a37d74d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?auto=webp&s=b68c3fa0dd07021db5ff9a94828a9c304aad83a1', 'width': 1200}, 'variants': {}}]}
[Project] MLC LLM: Universal LLM Deployment with GPU Acceleration
100
MLC LLM is a **universal solution** that allows **any language model** to be **deployed natively** on a diverse set of hardware backends and native applications, plus a **productive framework** for everyone to further optimize model performance for their own use cases. Supported platforms include: * Metal GPUs on iPhone and Intel/ARM MacBooks; * AMD and NVIDIA GPUs via Vulkan on Windows and Linux; * NVIDIA GPUs via CUDA on Windows and Linux; * WebGPU on browsers (through the companion project WebLLM). Github page: [https://github.com/mlc-ai/mlc-llm](https://github.com/mlc-ai/mlc-llm) Demo instructions: [https://mlc.ai/mlc-llm/](https://mlc.ai/mlc-llm/)
2023-04-29T03:49:49
https://www.reddit.com/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/
crowwork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132igcy
false
null
t3_132igcy
/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/
false
false
self
100
{'enabled': False, 'images': [{'id': 'ko_861QAjmN5QGLcpcjhOGoZ6wMhzLn7bMgUNveOKzs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=108&crop=smart&auto=webp&s=523c376d1fa069429a7d92a719052ac3847e316c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=216&crop=smart&auto=webp&s=dfed1f55059fab25f1fdcad787ed7474370ca796', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=320&crop=smart&auto=webp&s=d5d6a0d57868a501a0bbba61409b44627fb4cc41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=640&crop=smart&auto=webp&s=b667bd84f7159729f2624fa585137e1b188626ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=960&crop=smart&auto=webp&s=3d0237845de66f5d5d032b26d2aea94bae081ca5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=1080&crop=smart&auto=webp&s=fa38ff24a05d8373f4d1db129bc5d199eaf4c9c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?auto=webp&s=463dff0e12c42da8a086116159eed1479371b4e1', 'width': 1200}, 'variants': {}}]}
GPT4ALL on Windows (without admin rights)
6
I tried to run the model from https://github.com/nomic-ai/gpt4all using the "CPU Interface" on my Windows machine. I get an error that "Current binaries supported are x86 Linux and ARM Macs." I read that the workaround is to install WSL (Windows Subsystem for Linux) on my Windows machine, but I'm not allowed to do that on my work machine (admin locked). Are there other open-source chat LLM models that can be downloaded and run locally on a Windows machine, using only Python and its packages, without having to install WSL or Node.js or anything that requires admin rights?
2023-04-29T04:05:56
https://www.reddit.com/r/LocalLLaMA/comments/132ir4l/gpt4all_on_windows_without_admin_rights/
kayhai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132ir4l
false
null
t3_132ir4l
/r/LocalLLaMA/comments/132ir4l/gpt4all_on_windows_without_admin_rights/
false
false
self
6
{'enabled': False, 'images': [{'id': '48TKOYe3D-RZToTCRYNR1wv9MTcEYJMkNgNEQLAh_sY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=108&crop=smart&auto=webp&s=5e38e7a2b21712797c68483f0e3de65fabaf800f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=216&crop=smart&auto=webp&s=9a6074866bfd7fcff280953ef0db28d0206b297c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=320&crop=smart&auto=webp&s=8d0b5cea7e658c9fc58e92a5fef25005c13f9938', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=640&crop=smart&auto=webp&s=b77cede5f579a6b41cd3da233de52e07d0baa150', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=960&crop=smart&auto=webp&s=6880668855dd681e6b9a6128907dc57e914c221c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=1080&crop=smart&auto=webp&s=663b554353dd1a0df4dee956ed92bdcd024f9ae0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?auto=webp&s=aa93bea36a801cf76900888286e4ecbec7d044c6', 'width': 1200}, 'variants': {}}]}
Where do I download the weights for the 65B non-quantized model? Thanks
9
title
2023-04-29T09:31:28
https://www.reddit.com/r/LocalLLaMA/comments/132oerx/where_do_i_download_the_weights_for_the_65b_non/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132oerx
false
null
t3_132oerx
/r/LocalLLaMA/comments/132oerx/where_do_i_download_the_weights_for_the_65b_non/
false
false
self
9
null
Ensemble or cascade models?
4
I’m wondering whether there are any projects working on cascade or ensemble models that run inference in parallel? Google has an interesting paper on this a couple of years ago and its potential. https://ai.googleblog.com/2021/11/model-ensembles-are-faster-than-you.html?m=1
2023-04-29T10:44:53
https://www.reddit.com/r/LocalLLaMA/comments/132pphx/ensemble_or_cascade_models/
PookaMacPhellimen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132pphx
false
null
t3_132pphx
/r/LocalLLaMA/comments/132pphx/ensemble_or_cascade_models/
false
false
self
4
{'enabled': False, 'images': [{'id': 'lLO6b8jLMIgNbM0OZMpN_7reL_DisUnjnjDy4mEGKo0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=108&crop=smart&auto=webp&s=8b882cff4add09747badd77674ad6f36c2bc0bb6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=216&crop=smart&auto=webp&s=5d5e616e7d6f10ae73a1059cbca4e2c03682dd06', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=320&crop=smart&auto=webp&s=aab885a4061dca2e2e600cbd4f2d720bd3b268b5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=640&crop=smart&auto=webp&s=9b75efa2734f7aa30752f0a4e37d50a7dbd8f1ea', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=960&crop=smart&auto=webp&s=e8e2610c11dcc3453f86f1699c24a2d71cef78ee', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=1080&crop=smart&auto=webp&s=7a51917d214db07d476563b3af82bf41a5df5e25', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?auto=webp&s=15c9838dc746735ed0132938bbc6a798eab82c8f', 'width': 1200}, 'variants': {}}]}
How do I merge a LoRA with a LLaMA checkpoint?
12
Hi! I can launch a base model on my PC just fine, but when I try to load a LoRA alongside it, it OOMs. So I'm looking for a way to merge a LoRA into the checkpoint file beforehand. Looking in https://www.reddit.com/r/LocalLLaMA/wiki/models/, I see a lot of merges, but I wonder how they were done. I've searched for 'merge lora with checkpoint' in oobabooga, generally on the web with 'llama merge lora with checkpoint', and in those models' discussions on HF, yet I've found nothing. Does anybody here know how it is done? I would really prefer doing it in HuggingFace format (the merges in the wiki/models section are in HF), because of its greater support and convertibility to the needed formats.
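One common way to do it is with the peft library: attach the adapter, fold it into the base weights, then save a normal HF checkpoint. A sketch, assuming an HF-format base model and enough RAM to hold the merged fp16 weights; the paths are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/lora")  # attach the LoRA adapter
merged = model.merge_and_unload()                        # fold the LoRA deltas into the base weights

merged.save_pretrained("path/to/merged-hf")
AutoTokenizer.from_pretrained("path/to/llama-hf").save_pretrained("path/to/merged-hf")
```

The merged folder is then a plain HuggingFace checkpoint, so it can be quantised or converted to other formats like any other model.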
2023-04-29T12:14:01
https://www.reddit.com/r/LocalLLaMA/comments/132rc26/how_do_i_merge_a_lora_with_a_llama_checkpoint/
kabachuha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132rc26
false
null
t3_132rc26
/r/LocalLLaMA/comments/132rc26/how_do_i_merge_a_lora_with_a_llama_checkpoint/
false
false
self
12
null
Bitsandbytes and Torch compile errors
3
Hi all, I want to preface this help post with the fact that I have no experience with coding and the command line. I just wanted to try a supposedly local ChatGPT for fun. I used the one-click install method found on the webUI's GitHub page. That being said, I have been getting these 2 errors: "The installed version of bitsandbytes was compiled without GPU support." and "AssertionError: Torch not compiled with CUDA enabled". After pressing a key, the command line box just closes. I have tried searching as best as I can for fixes, but it is all too complicated and I am afraid of messing the issue up even more. I am on a Windows 10 system with an Nvidia card. As far as I know I have installed VSCode 2019, and the miniconda software exists in the download folder. I have installed the Vicuna 7B 1.1 model from the model list in the subreddit's wiki. Some solutions talk about launching conda, 'pip', and using the Python command line, but I have no idea how to even launch those. I really appreciate any help, thanks a lot!
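Both of those errors usually mean the Python environment the webui uses has a CPU-only PyTorch build. A quick diagnostic you can paste into that environment's Python prompt (just a sanity check, not a fix by itself):

```python
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # False here means a CPU-only torch wheel is installed
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```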
2023-04-29T14:50:14
https://www.reddit.com/r/LocalLLaMA/comments/132w31q/bitsandbytes_and_torch_compile_errors/
lapse23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132w31q
false
null
t3_132w31q
/r/LocalLLaMA/comments/132w31q/bitsandbytes_and_torch_compile_errors/
false
false
self
3
null
Have you guys seen the video published by Palantir? They are using Language models in Defense and the Military with real-time data.
54
They say " it'll be totally ethical ". The list of language models used in the video is shown in 7.02 minutes. Video [https://www.youtube.com/watch?v=XEM5qz\_\_HOU](https://www.youtube.com/watch?v=XEM5qz__HOU) News Article: [https://www.vice.com/en/article/qjvb4x/palantir-demos-ai-to-fight-wars-but-says-it-will-be-totally-ethical-dont-worry-about-it](https://www.vice.com/en/article/qjvb4x/palantir-demos-ai-to-fight-wars-but-says-it-will-be-totally-ethical-dont-worry-about-it)
2023-04-29T16:23:57
https://www.reddit.com/r/LocalLLaMA/comments/13302oa/have_you_guys_seen_the_video_published_by/
tamal4444
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13302oa
false
null
t3_13302oa
/r/LocalLLaMA/comments/13302oa/have_you_guys_seen_the_video_published_by/
false
false
self
54
{'enabled': False, 'images': [{'id': 'yd2784waBkm0YRDhHEJWwYSqmlidESL43KgAgEkaFxc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=108&crop=smart&auto=webp&s=8140624333fd9865cf698bb3a8c74ca6bca72199', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=216&crop=smart&auto=webp&s=11e2f91d5611b3c02217468198bfdb1fde68e081', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=320&crop=smart&auto=webp&s=d4864ec183c679b5850917f3c9067e9c91065d84', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?auto=webp&s=c87bf032e0176597fc0a0f727afe647dec7f0249', 'width': 480}, 'variants': {}}]}
Intel's integrated UHD graphics can use system RAM as VRAM. Any experiences?
22
I was thinking that, even though an integrated GPU is much slower than, say, a 4090, it's still supposed to be faster than the CPU, right? With the ability to use RAM as VRAM, albeit RAM itself being slower than VRAM, you can theoretically have a GPU with insane amounts of memory. I wonder if there is a noticeable improvement over pure CPU runs. I am not sure CUDA runs on Intel GPUs; however, there are [projects](https://github.com/vosen/ZLUDA) that may make this happen. If this works, that'd be interesting.
2023-04-29T17:41:38
https://www.reddit.com/r/LocalLLaMA/comments/1331yhq/intels_integrated_uhd_graphics_can_use_system_ram/
x54675788
self.LocalLLaMA
2023-04-29T21:06:46
0
{}
1331yhq
false
null
t3_1331yhq
/r/LocalLLaMA/comments/1331yhq/intels_integrated_uhd_graphics_can_use_system_ram/
false
false
self
22
{'enabled': False, 'images': [{'id': 'wOf2PEkZk-7KcNVCoiFZ9FQquSrcn9wcSnTq6S5pqd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=108&crop=smart&auto=webp&s=743f0873cea29624bbae45933a6190e2757840d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=216&crop=smart&auto=webp&s=75e607f7ca6239434855eb1268bbdc50562cd476', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=320&crop=smart&auto=webp&s=92fb284d78acd463f728fc82c75cc99b81954270', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=640&crop=smart&auto=webp&s=4c9d4b0e065e281775025947c01b98d2ebeeb201', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=960&crop=smart&auto=webp&s=c1dc27f178c92822e73a58f92b23dbe5e3283fe9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=1080&crop=smart&auto=webp&s=06b2e3e6fcdff17748a771c009bc6b1533503401', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?auto=webp&s=6a0784e2b157ddc336bed9ccfa613e475fb2c7d2', 'width': 1200}, 'variants': {}}]}
Benchmarks for Recent LLMs
13
Does anyone know of any updated benchmarks for LLMs? I only know of one and it's not updated - [https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit#gid=741531996](https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit#gid=741531996). I think this spreadsheet was possibly made using this tool [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the language task datasets available there. It would be nice if there were benchmarks for recently released LLMs, but the spreadsheet is view-only and does not allow community edits. Would such benchmarks be helpful for you? What is your favorite open-source LLM so far, and for which task?
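If you want to produce your own numbers, the harness linked above exposes a Python entry point. A hedged sketch: the model path and task list are placeholders, and the exact backend and argument names can differ between harness versions, so check its README.

```python
from lm_eval import evaluator  # EleutherAI/lm-evaluation-harness

results = evaluator.simple_evaluate(
    model="hf-causal",                            # Hugging Face causal-LM backend (name may vary by version)
    model_args="pretrained=path/to/llama-7b-hf",  # placeholder local model path
    tasks=["hellaswag", "arc_easy"],
    num_fewshot=0,
)
print(results["results"])
```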
2023-04-29T18:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1333exw/benchmarks_for_recent_llms/
PostScarcityHumanity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1333exw
false
null
t3_1333exw
/r/LocalLLaMA/comments/1333exw/benchmarks_for_recent_llms/
false
false
self
13
{'enabled': False, 'images': [{'id': 'LU06WWUDw0igwGsy4HzNT-3sWxJRqydpQKWaLpNGWp0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=108&crop=smart&auto=webp&s=d2c0f843f5b26a1cbacc4a59c9131b2917cadb5a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=216&crop=smart&auto=webp&s=cc3f78109f55997c57a82bc592c847b39d1c15d9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=320&crop=smart&auto=webp&s=b046eb69ebb410b5bf6fc49c5f5be8f39ed93195', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=640&crop=smart&auto=webp&s=4e10ca46df3cb4f08f5962db444cd17ece2f7840', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=960&crop=smart&auto=webp&s=ab27b102db5c934e7c7d007001ad81cb3380d574', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=1080&crop=smart&auto=webp&s=a65bcd454f5e730773ba89a20217c6d8c364d650', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?auto=webp&s=d806301d9d8c85dbe874fae9ed723529a47b9457', 'width': 1200}, 'variants': {}}]}
Which model && prompts best for RCAs
2
Hi, hope this is OK to post here. I'm wondering if anyone has tried using any models to generate RCAs (root cause analyses)? I've been having a lot of hit-or-miss success. For prompts I'll usually use something like: You work for company X. There was an incident; below is the timeline: - 04/12/2023 9:00 PM notification triggers that service X is not functioning, investigation begins - 04/12/2023 9:05 PM team discovers the culprit is DNS, issue is rectified - 04/12/2023 9:06 PM issue is rectified and alert clears Please write a root cause analysis with an executive summary. Assume all times in the timeline are UTC.
2023-04-29T19:42:09
https://www.reddit.com/r/LocalLLaMA/comments/1334s2o/which_model_prompts_best_for_rcas/
mrsxypants
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1334s2o
false
null
t3_1334s2o
/r/LocalLLaMA/comments/1334s2o/which_model_prompts_best_for_rcas/
false
false
self
2
null
Can't determine the name of the model.
0
[removed]
2023-04-29T21:18:35
https://www.reddit.com/r/LocalLLaMA/comments/13370i3/cant_deternine_the_name_of_the_model/
Yoad1704
self.LocalLLaMA
2023-04-29T21:22:10
0
{}
13370i3
false
null
t3_13370i3
/r/LocalLLaMA/comments/13370i3/cant_deternine_the_name_of_the_model/
false
false
default
0
null
Training a LLaMA LoRA with a .parquet file
3
I downloaded a Parquet file that was intended to be used to train a LoRA for LLaMA, and I'm running the oobabooga webui, but I can't figure out how to use the Parquet file as the training dataset - can anyone point me in the right direction?
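One workaround is to convert the Parquet file into a JSON dataset with pandas and point the training tab at that instead. A sketch under assumptions: the webui's training tab accepts formatted JSON datasets, and the source column names below are placeholders you would map from whatever the file actually contains.

```python
import pandas as pd

df = pd.read_parquet("dataset.parquet")   # reading Parquet needs pyarrow or fastparquet installed
print(df.columns.tolist())                # inspect which fields the file really has

# Placeholder mapping to an alpaca-style layout (instruction / output); adjust to your format template
df = df.rename(columns={"prompt": "instruction", "response": "output"})
df.to_json("my_dataset.json", orient="records", indent=2)
```

Drop the resulting JSON wherever the webui looks for training datasets and select it, together with a matching format template, in the training tab.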
2023-04-29T22:52:34
https://www.reddit.com/r/LocalLLaMA/comments/13395ip/training_a_llama_lora_with_a_parquet_file/
xoexohexox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13395ip
false
null
t3_13395ip
/r/LocalLLaMA/comments/13395ip/training_a_llama_lora_with_a_parquet_file/
false
false
self
3
null
Pygmalion releases two new LLaMA based models: Pygmalion 7B and the roleplay oriented Metharme 7B. These are major improvements over the old Pygmalion models.
144
2023-04-30T03:14:33
https://huggingface.co/PygmalionAI/pygmalion-7b
Creative-Rest-2112
huggingface.co
1970-01-01T00:00:00
0
{}
133efqp
false
null
t3_133efqp
/r/LocalLLaMA/comments/133efqp/pygmalion_releases_two_new_llama_based_models/
false
false
default
144
null
AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts
37
2023-04-30T04:09:21
https://www.tomshardware.com/news/amd-brags-about-cheaper-16gb-gpus
friedrichvonschiller
tomshardware.com
1970-01-01T00:00:00
0
{}
133fejy
false
null
t3_133fejy
/r/LocalLLaMA/comments/133fejy/amd_taunts_nvidia_for_expensive_vram_a_winwin/
false
false
https://b.thumbs.redditm…IUNQWUKIZ0hM.jpg
37
{'enabled': False, 'images': [{'id': 'm_97yI4HbmOtgeZk_hcqrRsVBHfiRoGY71cCosJKYIo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=108&crop=smart&auto=webp&s=d98b0f642505302dd55a089ec4ef2828cb3b6f73', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=216&crop=smart&auto=webp&s=6315f1c4315346b3262b8ace8302f98caeacdbaa', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=320&crop=smart&auto=webp&s=47c6eae31947104133a9dccc30b294ea5a590464', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=640&crop=smart&auto=webp&s=7dc36e1b6454f4e5b1907b6f951f34e2ee607aa3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=960&crop=smart&auto=webp&s=1ab582f9e2e16c618762b55a0df6d5b3d5ae69ce', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=1080&crop=smart&auto=webp&s=41f31e25dc49e1ed63818bc3ed40a725ac8b8233', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?auto=webp&s=d1e317e8051b73a1346e4cdcd7105ac3b1a8dd9e', 'width': 1200}, 'variants': {}}]}
Run Llama to ask questions about your files
52
I've seen a bunch of projects and new companies built around chatting with and answering questions about files, websites or documentation. Do you know whether there is any project doing so without using the ChatGPT API? I did this project to do just that (no langchain, no pinecone, no bloated over-engineering): [https://github.com/adriacabeza/erudito](https://github.com/adriacabeza/erudito). Give it a look if you want to. I'd like to know if somebody else has tried this and, if so, which model you think would be best to use. As long as it runs in llama.cpp, it should be trivial to swap models using the tool I've built.
2023-04-30T07:19:50
https://www.reddit.com/r/LocalLLaMA/comments/133ij7r/run_llama_to_ask_questions_about_your_files/
adriacabeza
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133ij7r
false
null
t3_133ij7r
/r/LocalLLaMA/comments/133ij7r/run_llama_to_ask_questions_about_your_files/
false
false
self
52
{'enabled': False, 'images': [{'id': 'NLrROSNE4kiiGHWM0W3hxo2G0Ajwg0P6maJ2XKh8AK8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=108&crop=smart&auto=webp&s=b88a6023b943e80f52c27d376a8a5db1bac28e6a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=216&crop=smart&auto=webp&s=81e95800ac7891a1d64fa4350180de33a1dc531c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=320&crop=smart&auto=webp&s=210a437b5d83a1d4034b3e4402144efcdb4a5e35', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=640&crop=smart&auto=webp&s=56e1d7ad33171a7e374fd5cfb84786cb1aaa615b', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?auto=webp&s=e2919192717a4ab6e896eeecee5e07a9d72eb89b', 'width': 768}, 'variants': {}}]}
Llama.cpp New Samplers?
15
https://github.com/ggerganov/llama.cpp/commit/dd7eff57d8491792010b1002b8de6a4b54912e5c There's basically no information about what these "new samplers" do and are for. Does anyone know?
2023-04-30T09:49:22
https://www.reddit.com/r/LocalLLaMA/comments/133kzb8/llamacpp_new_samplers/
Pan000
self.LocalLLaMA
2023-04-30T11:01:00
0
{}
133kzb8
false
null
t3_133kzb8
/r/LocalLLaMA/comments/133kzb8/llamacpp_new_samplers/
false
false
self
15
{'enabled': False, 'images': [{'id': 'z8DM78AUCrzaB7Azpe5LcJn2fLcN7b6aNRCVrI2ItI4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=108&crop=smart&auto=webp&s=e832a89cd717934f1c661ab7a378ebab181a7507', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=216&crop=smart&auto=webp&s=9477857d23484eee1618cdec9982eb4cc5f26300', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=320&crop=smart&auto=webp&s=0c4254c02693b5dc990aa56d9455fd64e78b986c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=640&crop=smart&auto=webp&s=397b0b22033a5e9c318051afb1a125e2d295093f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=960&crop=smart&auto=webp&s=585464f536032685dde2da1d0337f160c72d4c2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=1080&crop=smart&auto=webp&s=47a0c6e10f4e5464e3382dc45bcc0d2ec581ba30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?auto=webp&s=e5dd12c2b3f57439cb7d6ed7740de43ae666814a', 'width': 1200}, 'variants': {}}]}
Can't determine model name error
1
Hello, I recently wanted to download the new WizardLM model. Unfortunately, each time I got such complicated errors that I wanted to just stop trying. I found what works for me (the one-click installer), and after I managed to install the web UI, I wanted to download WizardLM. Of course, with my luck, it didn't work and I got the stupidest error: can't determine model name. I researched and researched, and every time I came across this error, for everyone else it was fixed by just removing the --model-menu line in startwebui and setting the model manually in the UI itself. I'm clueless. Please - HELP!
2023-04-30T11:59:44
https://www.reddit.com/r/LocalLLaMA/comments/133nb7o/cant_determine_model_name_error/
Yoad1704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133nb7o
false
null
t3_133nb7o
/r/LocalLLaMA/comments/133nb7o/cant_determine_model_name_error/
false
false
self
1
null
On RP and lore
6
After the new Pygmalion trained off Llama just dropped, and a post testing CAI's AI made me try it out for myself, I wondered about something. Aside from more and better-varied conversation and roleplay logs turned into datasets, it might help a lot if the model also had knowledge of fictional worlds and lore to go with them - say, the wikis (or even community sites) of all these movies, series, video games, etc., scraped into one large fiction/lore dataset. So I'm wondering: has this already been done? Is it even practical (it might end up mashing these universes together or further confusing the AI), possible, or legal (privacy, or even the right to such data)? Or is this already covered by the wiki portion of the datasets normally used (i.e. not just Wikipedia but other wikis too)? If it hasn't been done, I imagine it could be quite an undertaking, but would it be worth doing? BTW, just asking out of curiosity mostly.
2023-04-30T14:38:07
https://www.reddit.com/r/LocalLLaMA/comments/133tlxg/on_rp_and_lore/
reduserGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133tlxg
false
null
t3_133tlxg
/r/LocalLLaMA/comments/133tlxg/on_rp_and_lore/
false
false
self
6
null
Created a discord bot that understand images (based on MiniGPT-4 Vicuna model)
30
2023-04-30T15:25:22
https://www.reddit.com/gallery/133us6p
chain-77
reddit.com
1970-01-01T00:00:00
0
{}
133us6p
false
null
t3_133us6p
/r/LocalLLaMA/comments/133us6p/created_a_discord_bot_that_understand_images/
false
false
https://a.thumbs.redditm…Q45X39k3Qjf8.jpg
30
null
I recently tested the "MPT 1b RedPajama + dolly" model and was pleasantly surprised by its overall quality despite its small model size. Could someone help to convert it to llama.cpp CPU ggml.q4?
62
I've put [this model](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) to the brief test and it's impressively speedy, with a pretty decent text coherence. The model was pre-trained for 200B tokens by sampling from the subsets of the RedPajama dataset in the same proportions as were used by the Llama series of models. [Colab to try the model](https://colab.research.google.com/drive/19YGJ-eDe2Wm17hc9hLwobckHZjFB8lo5?usp=sharing) (GPU mode)|[Test Questions Source](https://github.com/aigoopy/llm-jeopardy/blob/main/questions.txt) **Pros:** * The model has a license [cc-by-sa-3.0](https://creativecommons.org/licenses/by-sa/3.0/), which is suitable for commercial use. * It is generating very coherent text out of the box. * BLAZING FAST **Cons:** * It is small, only 1B, that is why it is bad in facts * To improve text generation, it's important to use strict prompt engineering. For tasks, use "`instruction: %prompt%`", and for questions, use "`Q: %question%`" or also an instruction pre-prompt. Always end your prompts with "`. Response:`". This is simply an observation on best practices for text generation. # Examples: Q&A Question answering VS GPT 3.5, settings used: default (no rep penalty, etc) **Q: Which man born in 1932 was the son of a percussionist in the CBS radio orchestra has been nominated for 53 Oscars?** 1B: >A: James Stewart ChatGPT 3.5: >he man you are referring to is composer and conductor John Williams. He was born on February 8, 1932, in New York City, and his father played percussion for the CBS Radio Orchestra. Williams has been nominated for a total of 53 Academy Awards throughout his career, making him the second-most nominated individual in Oscar history, behind only Walt Disney. He has won five Oscars for his work on films such as "Jaws," "Star Wars," and "Schindler's List." **Q: Known for more philosophical works, he wrote the play 'La Mandragola', in which Florentines are rewarded for immoral actions?** 1B: >A: The play 'La Mandragola' is a play written by the Italian writer Luigi Pirandello. It was first performed in Florence in 1921. The play is about a man who is punished for immoral actions. The play ChatGPT 3.5: >The Italian philosopher and writer Niccolò Machiavelli is the author of the play 'La Mandragola'. While he is best known for his philosophical works, such as 'The Prince' and 'Discourses on Livy', 'La Mandragola' is a satirical comedy that was first performed in 1520. The play is set in Renaissance Florence and tells the story of a young man named Callimaco who falls in love with a beautiful and virtuous woman named Lucrezia. To win her over, Callimaco and a group of corrupt characters plot to deceive Lucrezia's husband and family, leading to a series of immoral actions that are ultimately rewarded. Machiavelli's play is a commentary on the corrupt state of society in his time and the ways in which people are often rewarded for dishonesty and immorality. &#x200B; **Q: Which nobel peace price winners each lived at times on Vilakazi St in Soweto, so it claims to be the worlds only street home to 2 Nobel Peace Prize winners?** 1B: >A: Both Nelson Mandela and Desmond Tutu lived on Vilakazi Street in Soweto. ChatGPT 3.5: >Vilakazi Street in Soweto, a township near Johannesburg, South Africa, is claimed to be the only street in the world where two Nobel Peace Prize winners have lived at different times. 
The two Nobel laureates who lived on Vilakazi Street are: > > > >Nelson Mandela - South Africa's first democratically elected President, who was awarded the Nobel Peace Prize in 1993 for his role in ending apartheid and promoting peaceful reconciliation in the country. Mandela lived on Vilakazi Street with his family in the 1940s and 1950s. > >Archbishop Desmond Tutu - a prominent anti-apartheid activist and Anglican bishop who was awarded the Nobel Peace Prize in 1984 for his nonviolent opposition to South Africa's racist policies. Tutu lived on Vilakazi Street with his family in the 1960s. > >Today, both of their former homes have been converted into museums and are popular tourist attractions in Soweto. **Q: In 1966, the year of who's death did he share plans for an experimental prototype community in Florida?** 1B: >A: The year was 1966. The community was called "The Village" and was located in the town of Lake Wales, Florida. The community was designed to be a self-sufficient, self-sustaining, and self-sufficient community ChatGPT 3.5: >In 1966, the year of Walt Disney's death, he shared plans for an experimental prototype community in Florida that would become known as EPCOT (Experimental Prototype Community of Tomorrow). Disney's vision for EPCOT was a futuristic city that would serve as a model for urban living, with advanced technologies, sustainable systems, and a focus on community and innovation. However, after Disney's death, plans for EPCOT shifted to become a theme park that would showcase technological advancements and international cultures. Today, EPCOT is one of four theme parks at Walt Disney World in Florida, but it still retains some of Disney's original vision for a forward-thinking and innovative community. **Q: Of the 13 nations through which the Equator passes, what is the only one whose coastline borders the Caribbean Sea?** B1: >A: The United States. ChatGPT 3.5: >The 13 countries through which the Equator passes are: > > > >Sao Tome and Principe > >Gabon > >Republic of Congo > >Democratic Republic of Congo > >Uganda > >Kenya > >Somalia > >Indonesia > >Kiribati > >Ecuador > >Colombia > >Brazil > >The territory of Ecuador (Galapagos Islands) > >Among these 13 countries, the only one whose coastline borders the Caribbean Sea is Colombia. &#x200B; **Q: Which decorative items in fashion history get their name from their origin in the port city of Strasbourg, on the border of France & Germany?** B1: >A: The name of the famous French fashion house Chanel comes from the city of Strasbourg, where the founder of the house, Gabrielle Chanel, was born. ChatGPT 3.5: >The decorative items in fashion history that get their name from their origin in the port city of Strasbourg are "Strasbourg ribbons" or "Strasbourg lace." Strasbourg ribbons are a type of narrow silk ribbon that was often used to trim clothing, while Strasbourg lace refers to a delicate type of lace that was originally made in the city of Strasbourg. Both of these decorative items were highly prized in the 18th and 19th centuries and were used to add a touch of elegance and sophistication to clothing and other textile items. &#x200B; # Examples: Story Telling 1B model only, settings: `repetition_penalty=1.5, num_tokens_to_generate = 100` **Prompt:** instruction: Write a fantasy story about LOTR. Response: I wrote a fantasy story about LOTR. It is called "The Lord of the Rings: The Battle of the Five Armies." It is a sequel to the first movie, "The Lord of the Rings: The Fellowship of the Ring." 
The story is about the last battle of the war between the forces of good and evil. &#x200B; **Prompt:** instruction: Write me a list of the top 10 dad jokes. Response: 1) Dad is so old he can't remember his own name, 2.) He's always late for work because it takes him an hour to get there and 3,) His wife doesn’t know how many kids they have or where their next meal will come from! **Prompt:** instruction: Write a scifi story. Response: I am writing about an alien race that has been in contact with humans for thousands of years, and they have developed technology to travel through space using the wormhole effect (which is also used by other species). The aliens are very advanced technologically but their society seems based on feudalism where all members work together as equals under one leader who rules over them from afar via holograms or some sort... The Story Of A Man Who Was Born With No Brain - Chapter 1 **Prompt:** instruction: Write a psychological profile of an average Reddit moderator, focusing on their characteristics and behaviors. Response: The typical reddit user is young (18-35), male or female with college education who has been active for at least 6 months in the past year; they are generally well educated but not necessarily highly skilled professionals from academia/research institutions as most moderators have no formal training to moderate content related discussions online &#x200B; # Conclusions I recall the text generation era of GPT2, and the improvement in quality is remarkable. It's feasible for commercial use, and its speed can be harnessed by certain "chaining" tools such as AutoGPT for specific tasks and on the CPU only. I am hopeful that someone can provide support with llama.cpp.
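If you want to poke at it locally instead of in the Colab, here is a minimal sketch of loading it with Hugging Face transformers and using the prompt format above. Note the assumptions: the repo's custom model class needs `trust_remote_code=True`, and the tokenizer choice follows my reading of the model card (GPT-NeoX tokenizer), so double-check both against the repo before relying on this.

```python
# Minimal sketch: load mpt-1b-redpajama-200b-dolly and query it with the
# "instruction: ... Response:" format described above.
# Assumptions (double-check against the model card): the repo's custom model
# class needs trust_remote_code=True, and it uses the GPT-NeoX tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-1b-redpajama-200b-dolly"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "instruction: Write me a list of the top 10 dad jokes. Response:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        repetition_penalty=1.5,  # the setting used for the story-telling examples
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```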
2023-04-30T15:51:24
https://www.reddit.com/r/LocalLLaMA/comments/133vfk3/i_recently_tested_the_mpt_1b_redpajama_dolly/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133vfk3
false
null
t3_133vfk3
/r/LocalLLaMA/comments/133vfk3/i_recently_tested_the_mpt_1b_redpajama_dolly/
false
false
self
62
{'enabled': False, 'images': [{'id': 'UKyZG43CUXRYhIK3U7Ytqf0673rIVHWRV48OM89kibY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=108&crop=smart&auto=webp&s=b61d2d6f28fe760543d86c9e78883d566d95f1de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=216&crop=smart&auto=webp&s=52017a5cdaa1e892f648dd7bf39815e86bfd41a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=320&crop=smart&auto=webp&s=7e9675cf20084c8cb105ea7a6f9c68561734594a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=640&crop=smart&auto=webp&s=4112457e44ace347e4ac10fe18788ae091df8d41', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=960&crop=smart&auto=webp&s=5b31b32da666a18cced44e96c94ba84e8cca320f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=1080&crop=smart&auto=webp&s=763542423e72cc7821f6b6cdcb0e1d5dab0c5ff8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?auto=webp&s=b9ad53ff1e48beeebe393f33fe48dcbf60385f81', 'width': 1200}, 'variants': {}}]}
LLaMA for poor
27
As the title says: is there any LLaMA for poor people who can't afford 50-100 GB of RAM or lots of VRAM? Yes, there are smaller 7B 4-bit quantized models available, but they are obviously not as good as the bigger models. I'm not sure if this exists, or if it's even possible, but if inference time were not an issue - even 5-10 seconds per token - is it possible to run a big model like 30B or 65B on a machine with something like 16GB RAM + swap, loading smaller parts of the model into memory to compute and so on? Will it be very slow? Yes. But is it possible? I know this might be a long shot, maybe even impossible, but I am curious whether there is some way to load the model in parts for inference, even if it's very slow. If anything like this exists, or if it's possible with some configuration, please let me know.
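For context, my rough back-of-envelope math on what a 4-bit model actually needs, assuming roughly 4.5 bits per weight for q4_0-style quantization (an approximation; real file sizes vary a bit):

```python
# Back-of-envelope estimate of ggml q4 model sizes, assuming roughly
# 4.5 bits per weight for q4_0-style quantization (an approximation;
# real file sizes vary a bit).
def q4_size_gb(n_params_billion: float, bits_per_weight: float = 4.5) -> float:
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

for n_billion in (7, 13, 33, 65):
    print(f"{n_billion:>2}B -> ~{q4_size_gb(n_billion):.0f} GB")
# 65B works out to roughly 36 GB, i.e. far more than 16 GB of RAM,
# so anything beyond 13B would have to be paged in from disk.
```

llama.cpp memory-maps its ggml files, so as far as I understand the OS can page the weights in from disk on demand; with 16 GB of RAM that means constant disk thrashing and probably seconds per token, but it should at least run rather than crash.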
2023-04-30T16:36:44
https://www.reddit.com/r/LocalLLaMA/comments/133wk3s/llama_for_poor/
pratiknarola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133wk3s
false
null
t3_133wk3s
/r/LocalLLaMA/comments/133wk3s/llama_for_poor/
false
false
self
27
null
Llama + search your own documents
0
Hi. Could you please tell me whether anyone has, in practice, managed to hook Llama/Alpaca/Vicuna or similar open-source models up to LlamaIndex (or any alternative) for search and question answering over their own documents/knowledge base?
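The closest recipe I've seen so far uses LangChain rather than LlamaIndex; a rough, untested sketch (class names as of recent langchain releases, and every path and model name below is a placeholder):

```python
# Rough, untested sketch: question answering over local documents with a
# ggml Llama-family model through LangChain. Assumes langchain,
# llama-cpp-python, sentence-transformers and faiss-cpu are installed.
from langchain.llms import LlamaCpp
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader
from langchain.chains import RetrievalQA

docs = TextLoader("my_notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.from_documents(chunks, embeddings)

llm = LlamaCpp(model_path="./models/vicuna-13b-q4_0.bin", n_ctx=2048)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=index.as_retriever())

print(qa.run("What does the document say about X?"))
```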
2023-04-30T17:19:37
https://www.reddit.com/r/LocalLLaMA/comments/133xl6w/llama_search_your_own_documents/
One_Creator_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133xl6w
false
null
t3_133xl6w
/r/LocalLLaMA/comments/133xl6w/llama_search_your_own_documents/
false
false
self
0
null
65B model help how to run thx
1
[removed]
2023-04-30T18:53:52
https://www.reddit.com/r/LocalLLaMA/comments/133zv7f/65b_model_help_how_to_run_thx/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133zv7f
false
null
t3_133zv7f
/r/LocalLLaMA/comments/133zv7f/65b_model_help_how_to_run_thx/
false
false
default
1
null
what does the llama run flags like 128, 256, 512 do.
1
[removed]
2023-04-30T19:46:07
https://www.reddit.com/r/LocalLLaMA/comments/13413ma/what_does_the_llama_run_flags_like_128_256_512_do/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13413ma
false
null
t3_13413ma
/r/LocalLLaMA/comments/13413ma/what_does_the_llama_run_flags_like_128_256_512_do/
false
false
default
1
null
Please convert Pygmalion 7b's to ggml
5
Dear friends! Could someone please convert these newest models to ggml? [https://huggingface.co/PygmalionAI](https://huggingface.co/PygmalionAI) It would be much appreciated <3
2023-04-30T19:49:48
https://www.reddit.com/r/LocalLLaMA/comments/13416mc/please_convert_pygmalion_7bs_to_ggml/
Away-Sleep-2010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13416mc
false
null
t3_13416mc
/r/LocalLLaMA/comments/13416mc/please_convert_pygmalion_7bs_to_ggml/
false
false
self
5
{'enabled': False, 'images': [{'id': 'zqK1Ia5Kv1Th0cm-y__02PwCOZzubMO4uDhnoFv6ki8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=108&crop=smart&auto=webp&s=55421c8f4b51c76499322288cf539fb5806559b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=216&crop=smart&auto=webp&s=a8960a492a850153e97d5a75abf465f61f37f476', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=320&crop=smart&auto=webp&s=98f3a6abd4a676cb08ca50663874a03e8d882b35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=640&crop=smart&auto=webp&s=0544fa7ebc82e338af726adefca6d8b60ffeab20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=960&crop=smart&auto=webp&s=afb7013b01f3919bdd1cdff817f8aa66d6a4d6a5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=1080&crop=smart&auto=webp&s=3025d940d322e1c7794d1a0016189de929855472', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?auto=webp&s=5d3766f7d6e6e341be9a2c6e5a122ae26016ffb9', 'width': 1200}, 'variants': {}}]}
model file is floating point does not work.
1
[removed]
2023-04-30T20:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1341ryz/model_file_is_floating_point_does_not_work/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1341ryz
false
null
t3_1341ryz
/r/LocalLLaMA/comments/1341ryz/model_file_is_floating_point_does_not_work/
false
false
default
1
null
How do I install Vicuna to be used in notebooks?
7
All of the documentation I'm seeing lets you use open-source llama models via a command line or a visual interface. I just want to use it in a notebook where I can feed it an array of text, an array of prompts, and just get an array of outputs. Is there any guide to this?
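For reference, the rough shape I have in mind is below - a minimal sketch assuming an unquantized HF-format checkpoint, enough VRAM for fp16, and `accelerate` installed; the model id and generation settings are just placeholders:

```python
# Minimal notebook sketch: load a Vicuna/Llama HF checkpoint and map an
# array of prompts to an array of outputs. fp16 7B needs roughly 14 GB of
# VRAM, and device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eachadea/vicuna-7b-1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompts = ["Summarize the following text: ...", "Translate to French: ..."]
outputs = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # strip the prompt tokens so only the completion is kept
    outputs.append(tokenizer.decode(ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

print(outputs)
```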
2023-04-30T20:20:31
https://www.reddit.com/r/LocalLLaMA/comments/1341x0c/how_do_i_install_vicuna_to_be_used_in_notebooks/
somethingstrang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1341x0c
false
null
t3_1341x0c
/r/LocalLLaMA/comments/1341x0c/how_do_i_install_vicuna_to_be_used_in_notebooks/
false
false
self
7
null
why is it when i run 8 bit quantization it actually does 4 bit ??
1
[removed]
2023-04-30T20:25:03
https://www.reddit.com/r/LocalLLaMA/comments/13420x0/why_is_it_when_i_run_8_bit_quantization_it/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13420x0
false
null
t3_13420x0
/r/LocalLLaMA/comments/13420x0/why_is_it_when_i_run_8_bit_quantization_it/
false
false
default
1
null
What model parameters is everyone using?
26
I have been having a ball with the plethora of models that have been released. Currently using llama.cpp on my CPU, hopefully utilizing a GPU soon. But the main question I have is: what parameters are you all using? I have found the reference information for transformer models on HuggingFace, but I've yet to find the parameters other people have actually used. I've adjusted top_k, top_p, and temperature so far. Reference: Parameters for manipulation of the model output logits - [https://huggingface.co/docs/transformers/main\_classes/text\_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig)
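For context, those reference parameters end up as `GenerationConfig` / `generate()` arguments on the transformers side (llama.cpp exposes the same knobs as CLI flags); a small sketch with common starting values, illustrative rather than a recommendation:

```python
# Illustrative only: how the usual sampling knobs map onto transformers'
# GenerationConfig. The values are common starting points, not recommendations.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,          # lower = more deterministic
    top_p=0.9,                # nucleus sampling cutoff
    top_k=40,                 # only consider the 40 most likely tokens
    repetition_penalty=1.15,  # discourage verbatim loops
    max_new_tokens=400,
)

# then: output_ids = model.generate(**inputs, generation_config=gen_config)
```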
2023-04-30T21:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1343bgz/what_model_parameters_is_everyone_using/
TurdPuller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1343bgz
false
null
t3_1343bgz
/r/LocalLLaMA/comments/1343bgz/what_model_parameters_is_everyone_using/
false
false
self
26
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
[deleted by user]
1
[removed]
2023-05-01T00:38:25
[deleted]
1970-01-01T00:00:00
0
{}
1347xgc
false
null
t3_1347xgc
/r/LocalLLaMA/comments/1347xgc/deleted_by_user/
false
false
default
1
null
I wrote a tool on embeddings with Local LLM and langchain
1
[removed]
2023-05-01T00:43:41
https://www.reddit.com/r/LocalLLaMA/comments/13481o5/i_wrote_a_tool_on_embeddings_with_local_llm_and/
Haunting_Ad_5900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13481o5
false
null
t3_13481o5
/r/LocalLLaMA/comments/13481o5/i_wrote_a_tool_on_embeddings_with_local_llm_and/
false
false
default
1
null
LORA Training Vicuna 13B with big text files
34
I'm right now using ooba booga GUI on windows to fine-tune the Vicuna 13B with largish text files. I'm using an A6000 on 13B 8bit, but I can easily see that 24GB or even 16GB could definitely be feasible for this with the right parameters and more time. To start with, I tested it on a few paragraphs just to test out the hyperparameters, and discovered any epoch of less than 20 and the chat prompt would basically treat my LORA as if it didn't even exist. *Likely* because I'm a total n00b, but after 20 repetitions it did seem that Vicuna had an idea what the few paragraphs stated, with a healthy bit of hallucinations thrown in. Likely the temperature is too high. Now, the A6000 is pegged at 43 GB / 47.5 GB usage and 85 degrees C likely for the next 2-4 hours, and I'll see the results. Of course, I know this is just for research, but out of curiosity, does anyone have their hyper-parameters for giving Vicuna 13B some decent grasp of a body of technical documentation? My current hyper parameters are Epochs = 20, Micro batch = 1, Batch size = 1024, Lora Rank = 512, Lora Alpha = 1024, Cutoff length = 512, Overlap length = 256, Newline Cut Length = 256. I hope it doesn't run out of RAM in a few hours. If anyone would like to share their hyper parameters and results, I'd look forward to hearing about it, this stuff is pretty hard to google.
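For anyone who wants to reproduce something similar outside the GUI, my understanding is that these settings map roughly onto a PEFT `LoraConfig` plus `TrainingArguments` as sketched below; the target modules and the micro-batch-to-gradient-accumulation mapping are my guesses at what the webui does, not verified:

```python
# Sketch of the (assumed) equivalent peft/transformers setup for the
# hyperparameters above: rank/alpha -> r/lora_alpha, "micro batch" -> the
# per-device batch size, and "batch size" reached via gradient accumulation.
# target_modules is a typical Llama choice, not necessarily what the GUI uses.
from peft import LoraConfig, get_peft_model
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=512,
    lora_alpha=1024,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="lora-out",
    num_train_epochs=20,
    per_device_train_batch_size=1,     # "micro batch"
    gradient_accumulation_steps=1024,  # effective batch size of 1024
    learning_rate=3e-4,                # typical LoRA starting point
    fp16=True,
)
```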
2023-05-01T01:55:11
https://www.reddit.com/r/LocalLLaMA/comments/1349kek/lora_training_vicuna_13b_with_big_text_files/
alittleteap0t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1349kek
false
null
t3_1349kek
/r/LocalLLaMA/comments/1349kek/lora_training_vicuna_13b_with_big_text_files/
false
false
self
34
null
What is the difference on group size when using a 4 bit quantized model?
14
I'm running Llama 30B locally, using the quantized 4-bit version. I notice there are 128g and 32g group sizes available, and I don't really understand the difference despite doing some searching. Is one better than the other? Faster? If someone could explain in layman's terms, it would be greatly appreciated!
2023-05-01T02:49:56
https://www.reddit.com/r/LocalLLaMA/comments/134aolr/what_is_the_difference_on_group_size_when_using_a/
SRavingmad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134aolr
false
null
t3_134aolr
/r/LocalLLaMA/comments/134aolr/what_is_the_difference_on_group_size_when_using_a/
false
false
self
14
null
Has anyone managed to load LLaMA quantized 4-bit using the decapoda-research links?
1
[removed]
2023-05-01T06:09:24
https://www.reddit.com/r/LocalLLaMA/comments/134eay9/has_anyone_managed_to_load_llama_quantized_4bit/
vanilla-acc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134eay9
false
null
t3_134eay9
/r/LocalLLaMA/comments/134eay9/has_anyone_managed_to_load_llama_quantized_4bit/
false
false
default
1
null
Alpaca LoRA checkpoints by chansung trained with longer cutoff length and latest Cleaned Alpaca Dataset with GPT-4 data
47
[removed]
2023-05-01T07:12:38
https://www.reddit.com/r/LocalLLaMA/comments/134fe6j/alpaca_lora_checkpoints_by_chansung_trained_with/
Creative-Rest-2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134fe6j
false
null
t3_134fe6j
/r/LocalLLaMA/comments/134fe6j/alpaca_lora_checkpoints_by_chansung_trained_with/
false
false
default
47
null
Vicuna 13B for Information Extraction in Medical Text
23
Hi everyone, I'm trying to use an LLM to extract information from documents in a highly technical domain (medicine). The desired output would be a structured document where all the fields get filled with the extracted info. I tried LoRA-Alpaca fine-tuning with a small dataset, but the results were not satisfactory. What are my best options? - Using **consecutive prompts** to make Vicuna extract the info (see the sketch below)? - Investing some time experimenting with **vector databases** + Vicuna? - Using a classic **BERT-like model for NER** (even though the labelling process would be crazy expensive in medicine)?
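To make the first option concrete, this is roughly what I mean by consecutive prompts - the field names, prompt wording, and the `generate` callable are all illustrative placeholders:

```python
# Rough sketch of the consecutive-prompts option: ask for one field at a
# time and collect the answers. Field names, prompt wording, and the
# `generate` callable are illustrative placeholders.
FIELDS = ["diagnosis", "medication", "dosage", "follow_up"]

def extract_fields(document: str, generate) -> dict:
    """`generate(prompt) -> str` is whatever runs the Vicuna inference."""
    results = {}
    for field in FIELDS:
        prompt = (
            "Below is a medical document. Extract only the patient's "
            f"{field}. If it is not mentioned, answer 'N/A'.\n\n"
            f"Document:\n{document}\n\n{field}:"
        )
        results[field] = generate(prompt).strip()
    return results
```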
2023-05-01T08:01:40
https://www.reddit.com/r/LocalLLaMA/comments/134g6zz/vicuna_13b_for_information_extraction_in_medical/
robbery_bob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134g6zz
false
null
t3_134g6zz
/r/LocalLLaMA/comments/134g6zz/vicuna_13b_for_information_extraction_in_medical/
false
false
self
23
null
From Python Developer to AI developer -> Recommended Tutorials
1
[removed]
2023-05-01T08:25:01
https://www.reddit.com/r/LocalLLaMA/comments/134gkz4/from_python_developer_to_ai_developer_recommended/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134gkz4
false
null
t3_134gkz4
/r/LocalLLaMA/comments/134gkz4/from_python_developer_to_ai_developer_recommended/
false
false
default
1
null
Economical equipment for ggml
4
Is it worth buying a computer with an 11900K processor and 128 GB DDR4 RAM for its AVX-512 support, or should I get a 13900K with 128 GB DDR4 RAM for overall performance? In both cases, I would add 1 (maybe 2) RTX 3090s in the near future for LoRA training or running Llama on the GPU. I am asking for advice because I won't be buying A100 or H100 cards any time soon. PS: An alternative is one of the AMD Zen 4 processors, but would it beat the Intel setup?
2023-05-01T09:51:09
https://www.reddit.com/r/LocalLLaMA/comments/134i0aq/economical_equipment_for_ggml/
Maofead
self.LocalLLaMA
2023-05-01T10:15:20
0
{}
134i0aq
false
null
t3_134i0aq
/r/LocalLLaMA/comments/134i0aq/economical_equipment_for_ggml/
false
false
self
4
null
MetaIX/GPT4-X-Alpasta-30b-4bit · Hugging Face
38
This seems very good so far. It can be 'jailbroken' easily too.
2023-05-01T10:08:18
https://www.reddit.com/r/LocalLLaMA/comments/134ib4d/metaixgpt4xalpasta30b4bit_hugging_face/
ambient_temp_xeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134ib4d
false
null
t3_134ib4d
/r/LocalLLaMA/comments/134ib4d/metaixgpt4xalpasta30b4bit_hugging_face/
false
false
self
38
null
Is it achievable through fine-tuning?
9
I'd like to build a chatbot specialised in the content of my forum (500k words in 1000 topics), so I can ask it: * Questions similar to ones asked in the past, and get answers similar to or better than what has been posted on my forum. * New questions, and get accurate answers thanks to the knowledge of past discussions on my forum. Note that not all the forum posts are questions, but most of them are. My understanding is that this can be achieved by fine-tuning a model on the content of my forum. **Questions:** * Is my understanding correct? * Is it achievable using state-of-the-art LLMs? * How much work would it be to transform the content of my forum into input for the model, and what data structure is recommended for this task (see the sketch below)? * Is a modern GPU (e.g. a 3090) enough to perform this task in a few days? * Can I repeat this every week to incorporate the latest posts (around 50/week), or is a critical amount of text needed before it has any measurable impact?
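For the data-structure question, my current assumption is the usual Alpaca-style instruction/response JSON; a sketch of flattening threads into that shape (field names are convention, not a requirement):

```python
# Sketch: flatten forum threads into Alpaca-style instruction/response pairs,
# a common fine-tuning format (the field names are convention, not a rule).
import json

def thread_to_examples(thread: dict) -> list:
    """thread = {"question": str, "answers": [str, ...]} - illustrative shape."""
    return [
        {"instruction": thread["question"], "input": "", "output": answer}
        for answer in thread["answers"]
    ]

threads = [
    {"question": "How do I re-seal the widget housing?",
     "answers": ["Remove the four screws, clean the groove, apply fresh sealant."]},
]

examples = [ex for t in threads for ex in thread_to_examples(t)]
with open("forum_dataset.json", "w") as f:
    json.dump(examples, f, indent=2)
```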
2023-05-01T10:09:12
https://www.reddit.com/r/LocalLLaMA/comments/134ibm3/is_it_achievable_through_finetuning/
Kaiizeddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134ibm3
false
null
t3_134ibm3
/r/LocalLLaMA/comments/134ibm3/is_it_achievable_through_finetuning/
false
false
self
9
null
[deleted by user]
3
[removed]
2023-05-01T11:00:21
[deleted]
1970-01-01T00:00:00
0
{}
134ja7c
false
null
t3_134ja7c
/r/LocalLLaMA/comments/134ja7c/deleted_by_user/
false
false
default
3
null
What's your take on the plea for halting development of models larger than GPT4?
26
I love technology and computer science, it's exciting to be part of the advent of a revolution. At the same time, there is so much potential danger if we are careless that is quite frightening. Our society might improve and flourish or become worse and perish but surely deploying artificial inteligence at scale will do something. Lately, I've been debating in my inner monologue a few moral concerns about all this after watching some videos and opinions on the subject. This was my journey. I started with this [interview with Yannic Kilcher](https://www.youtube.com/watch?v=TFa539R09EQ) (can you believe the dude actually has eyes?!). Very down to earth as always and great sense of open source and free software, his work (and of many others) on the [Open Assistant](http://open-assistant.io) is such an invaluable effort in community powered RLHF that must be praised all over. It's a great base to start with because I share his points of view. To understand the real problem we are facing with the misalignment of LLM's and social media in general, I watched this insightful, although somewhat hard to follow, [talk by Eliezer Yudkowsky](https://www.youtube.com/watch?v=EUjc1WuyPT8). He comes across as a well educated scholar that clearly has something to say about the field of keeping these things in line. Notice that this talk was given in May 2016, before all hell breaks loose. One year later, the paper "[Attention is all you need](https://arxiv.org/abs/1706.03762)" was submitted and the Transformer architecture becomes the base and unifies all machine learning fields, allowing the extremely rapid development that we are witnessing now but at the same time firing the starting gun to the race between the development of the models and the development of the tools to understand and control them. I followed that up with his [Lex Fridman's podcast](https://www.youtube.com/watch?v=AaTRHFaaPG8) episode to get his opinion on the more recent state of the tech. I think this is a very engaging conversation, it's quite scary and sad at times but gives you some perspective, well, his perspective at least. I'm still on the fence about some of the topics, or rather, their actual impact on humanity, as I said, deploying AI at scale will surely do something, I'm not sure if killing us all is an overstatement to get attention to the cause, just prophetic mambo jumbo or in fact a plausible reality we might face in some shape or form. I'm still not convinced that we are all doomed but the bit at around 1:22:00 mark is quite interesting. Next I watched [Max Tegmark's episode](https://www.youtube.com/watch?v=VcVfceTsD0A) he's a physicist at MIT and president of the Future of Life Institute, the guys proposing the [open letter to halt development of giant LLM's bigger than GPT4](https://futureoflife.org/open-letter/pause-giant-ai-experiments/). His view is in tune with Eliezer's on a lot of subjects and they share the ideia that, unregulated advancement will eventually end with our demise. Another interesting conversation worth watching. By this time, I was facing an existential crisis. To lighten up the mood I decided to watch the talk ["The A.I. Dilemma"](https://www.youtube.com/watch?v=xoVJKj8lcNQ) presented by Tristan Harris and Aza Raskin. It's essentially a TLDW of the previous conversations with real examples of the dangers that this technology might escalate into. The sense of dread and despair is hitting strong. 
At this time I wanted a more unbiased opinion on this matter, I looked for [sentdex's analysis](https://www.youtube.com/watch?v=lJNblY3Madg) of the [OpenAI's "GPT-4"](https://arxiv.org/abs/2303.08774) and [Microsoft's "Sparks of AGI"](https://arxiv.org/abs/2303.12712) "*papers*". His final remarks are interesting, it's good to have the point of view of someone that has been following the advancements from the sidelines for some time, I think it serves as an anchor to hold us from the hysteria that is being created and, ultimately, is just hyping a product. To finish on a lighter note, these 2 videos brought me back from the ashes of my past self before I went into the land of madness: [Fireship](https://www.youtube.com/watch?v=TpZcGhYp4rw) and [Programmers are also human](https://www.youtube.com/watch?v=sxdDVAv6QDY). In the end, I'm still struggling to find my own position. I believe that the apocalyptic views of the annihilation of humanity are just overreaction to bring attention of the general public to the pressing concerns about regulation. When the people that are working with this stuff on a daily basis and have the advantage of first person view about it are the ones proposing for the timeout I think we should at least hear what they have to say. We need to make sure we press forward safely, otherwise progress will derail into Social Media 2.0 - Bullshit^2 As I said, I love technology, these past months have been extremely exciting to witness, we were handed something quite special and hopefully we will have the strength of mind to hold it responsibly. Should we keep stacking more layers and pumping compute into these models until they are an entity? Or walk in baby steps from now on? Should the next iteration be open source? Should it even exist? PS: Sorry for the long post and believe me, it was all written using small, dangling, tentacle-like things made of flesh and bone hitting a real keyboard.
2023-05-01T12:16:56
https://www.reddit.com/r/LocalLLaMA/comments/134kvmd/whats_your_take_on_the_plea_for_halting/
Rogerooo
self.LocalLLaMA
2023-05-01T12:54:35
0
{}
134kvmd
false
null
t3_134kvmd
/r/LocalLLaMA/comments/134kvmd/whats_your_take_on_the_plea_for_halting/
false
false
self
26
{'enabled': False, 'images': [{'id': 'ABt1g9t2rHB4DDIXxOCnbcaArQqezSPhNCQoA-4Xohk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=108&crop=smart&auto=webp&s=cf2fd449128514c846e3510e596edd170e04a89b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=216&crop=smart&auto=webp&s=1f36d8ca9b0d72ece376b106ea30a0e3cfdbc205', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=320&crop=smart&auto=webp&s=510cb42047773238733accb36ec2f8ad09b825ee', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?auto=webp&s=54994e5ff62d2f7ae95f77fd623e060f1278434c', 'width': 480}, 'variants': {}}]}
newbie seeking impressive llama models, am i missing something?
15
hi llama friends! i'm new to this whole open-source language model scene by meta, and i get what it's all about. but i gotta say, the outputs from these llama models haven't really blown me away. i already tried vicuna, which the community seems to think is the best model, but it didn't impress me that much either. it kind of reminded me of a slightly faster gpt NeoX model, and even just for normal chatting, it didn't quite live up to the hype. i of course know they wont stack up to anything like gpt 4 yet, but with all the hype from others, i feel like i should be getting way more than i actually am. so, i'm wondering if it's just me or if there are smarter llama models i haven't discovered yet? i've got an rtx 3080, so i don't think hardware is the issue here. is it possible that the llama space is just underdeveloped or am i missing out on some really great models? maybe i just expected too much, but i can't help thinking that everyone must be here for *something*, so that's why i'm here asking. i'd really like to know what keeps you all excited about llama models and which ones have managed to impress you. i'd love to hear about your experiences with different llama models, and which ones you think are the most impressive and efficient. any recommendations are greatly appreciated! thanks in advance, excited to learn from you all!
2023-05-01T17:26:14
https://www.reddit.com/r/LocalLLaMA/comments/134udaa/newbie_seeking_impressive_llama_models_am_i/
8octothot8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134udaa
false
null
t3_134udaa
/r/LocalLLaMA/comments/134udaa/newbie_seeking_impressive_llama_models_am_i/
false
false
self
15
null
Performacne Benchmarks?
7
Hi, I'm currently in the process of figuring out what hardware I want to buy. I find it very hard to make an informed decision since I can't find any hard facts about performance with regard to NNs, LLMs, and so on. For example, how much faster exactly is a 4090 compared to a 3090 with the same amount of VRAM for inference? Does anyone have any links to benchmarks like that?
2023-05-01T19:08:14
https://www.reddit.com/r/LocalLLaMA/comments/134y8vu/performacne_benchmarks/
overlydelicioustea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134y8vu
false
null
t3_134y8vu
/r/LocalLLaMA/comments/134y8vu/performacne_benchmarks/
false
false
self
7
null
Is it possible to exceed the 2000 Token limit?
8
I have a 3090 GPU with 24 GB of VRAM. Why is it not possible to run, for example, a 13B parameter model with a much higher token limit?
2023-05-01T21:43:12
https://www.reddit.com/r/LocalLLaMA/comments/13528ux/is_it_possible_to_exceed_the_2000_token_limit/
Ferrero__64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13528ux
false
null
t3_13528ux
/r/LocalLLaMA/comments/13528ux/is_it_possible_to_exceed_the_2000_token_limit/
false
false
self
8
null
Nvidia released a 2b model trained on 1.1T Tokens called GPT-2B-001
89
2023-05-01T22:50:59
https://huggingface.co/nvidia/GPT-2B-001
Dany0
huggingface.co
1970-01-01T00:00:00
0
{}
1353xal
false
null
t3_1353xal
/r/LocalLLaMA/comments/1353xal/nvidia_released_a_2b_model_trained_on_11t_tokens/
false
false
https://b.thumbs.redditm…ygfwqblgF5mM.jpg
89
{'enabled': False, 'images': [{'id': '5NviXd9qrigtEN3hg7qgXjC1CoOPDkTh7qbicLQ5SW4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=108&crop=smart&auto=webp&s=7081fc128c5f063773a98f5d50539e7f8becc1fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=216&crop=smart&auto=webp&s=27a8fd8f79d3f05a29bf5193a4e63e665807ddcb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=320&crop=smart&auto=webp&s=866597265fe791610dae4da4dea0bd993a534134', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=640&crop=smart&auto=webp&s=1f73c7977cf3b81ba58c74cd31867b1dfb40dfca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=960&crop=smart&auto=webp&s=f06d5503594e0406af74692821235b97a7e66f0b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=1080&crop=smart&auto=webp&s=73af14caa3c47e36d948de97c3bd410d497d46e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?auto=webp&s=d25665e3205faedd6d97e3c993795105058a4419', 'width': 1200}, 'variants': {}}]}
Is it possible to use Siri Shortcuts with Oobabooga?.
0
[removed]
2023-05-01T22:57:15
https://www.reddit.com/r/LocalLLaMA/comments/135430y/is_it_possible_to_use_siri_shortcuts_with/
Ferrero__64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135430y
false
null
t3_135430y
/r/LocalLLaMA/comments/135430y/is_it_possible_to_use_siri_shortcuts_with/
false
false
default
0
null
[deleted by user]
1
[removed]
2023-05-01T23:03:26
[deleted]
1970-01-01T00:00:00
0
{}
13548yy
false
null
t3_13548yy
/r/LocalLLaMA/comments/13548yy/deleted_by_user/
false
false
default
1
null
LLaMA-Adapter V2: fine-tuned LLaMA 65B for visual instruction, and LLaMA Chat65B trained with ShareGPT data for chatting. Chat65B model has been released.
47
2023-05-01T23:40:21
https://arxiv.org/abs/2304.15010
Creative-Rest-2112
arxiv.org
1970-01-01T00:00:00
0
{}
13554oa
false
null
t3_13554oa
/r/LocalLLaMA/comments/13554oa/llamaadapter_v2_finetuned_llama_65b_for_visual/
false
false
default
47
null
[deleted by user]
1
[removed]
2023-05-02T00:34:54
[deleted]
1970-01-01T00:00:00
0
{}
1356cyd
false
null
t3_1356cyd
/r/LocalLLaMA/comments/1356cyd/deleted_by_user/
false
false
default
1
null
Someone claims to have finetuned Pythia for 8k context length.
20
[https://www.reddit.com/r/MachineLearning/comments/1347f5m/d\_a\_quest\_for\_very\_long\_sequence\_length/](https://www.reddit.com/r/MachineLearning/comments/1347f5m/d_a_quest_for_very_long_sequence_length/) What's the deal? I've heard people say adding extra context is extremely difficult without basically starting from scratch or using some boring workaround like sliding context windows. Would something like this be possible to integrate into other models?
2023-05-02T01:08:39
https://www.reddit.com/r/LocalLLaMA/comments/135730l/someone_claims_to_have_finetuned_pythia_for_8k/
cyborgsnowflake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135730l
false
null
t3_135730l
/r/LocalLLaMA/comments/135730l/someone_claims_to_have_finetuned_pythia_for_8k/
false
false
self
20
null
Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
10
Hey everyone! I've been experimenting with various models lately and find myself wanting to keep track of their outputs in a more organized way. I was wondering if there's a way to redirect these outputs into a TXT file, making it much easier to compare results and monitor performance. Has anyone tried doing this before? If so, could you please share your insights, code snippets, or any resources you've found helpful in achieving this? I believe this could be a great way for me and others to manage their data more efficiently while working with different models. I appreciate any suggestions or help you can provide. Looking forward to learning from this awesome community! Thanks in advance!
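The simplest thing I've found is plain shell redirection (`./main -m model.bin -p "..." -n 256 > run1.txt 2>&1`), but for comparing several models a small Python wrapper keeps things tidier; a sketch, with the binary path, model files, prompt, and flags as placeholders for whatever you normally run:

```python
# Sketch: run llama.cpp for several models and save each output to its own
# .txt file. Binary path, model files, prompt and flags are placeholders.
import subprocess
from pathlib import Path

models = ["models/vicuna-13b-q4_0.bin", "models/wizardlm-7b-q4_0.bin"]
prompt = "Explain quantization in one paragraph."

Path("results").mkdir(exist_ok=True)
for model_path in models:
    out_file = Path("results") / (Path(model_path).stem + ".txt")
    run = subprocess.run(
        ["./main", "-m", model_path, "-p", prompt, "-n", "256"],
        capture_output=True, text=True,
    )
    out_file.write_text(run.stdout + "\n--- stderr ---\n" + run.stderr)
    print("wrote", out_file)
```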
2023-05-02T10:39:05
https://www.reddit.com/r/LocalLLaMA/comments/135hcvc/redirecting_model_outputs_from_llamacpp_to_a_txt/
Most_Ad_1905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135hcvc
false
null
t3_135hcvc
/r/LocalLLaMA/comments/135hcvc/redirecting_model_outputs_from_llamacpp_to_a_txt/
false
false
self
10
null
Best 30b 4bit model?
34
What do you think is the best 30B 4-bit model currently around? I've been playing with several and have found them to be a mixed bag. For example, Alpachino 30B is fast and okay but tends to go off on strange tangents, while Alpasta is good but generation takes a year and a half for some reason. Any good suggestions? EDIT: running a 3090
2023-05-02T12:56:55
https://www.reddit.com/r/LocalLLaMA/comments/135kh16/best_30b_4bit_model/
PsyckoSama
self.LocalLLaMA
2023-05-04T01:30:43
0
{}
135kh16
false
null
t3_135kh16
/r/LocalLLaMA/comments/135kh16/best_30b_4bit_model/
false
false
self
34
null
Vicuna Local Conversation
6
Hi all, I'm using Vicuna 7B ("eachadea/vicuna-7b-1.1") and I want to simulate chat behaviour where the LLM can use the conversation history to answer appropriately. From the official Vicuna GitHub repo (lm-sys/FastChat) I found out that they feed in the whole conversation in this format: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:" I tried it myself, but the model doesn't seem to use the previous conversation to respond to new questions (i.e. if I say "your name is Thomas" and then ask its name, the answer is "Vicuna"). Does someone have an idea of how to do this? Edit: I noticed that Vicuna was specifically instructed to always answer "What is your name?" that way. Just changing my final question to "What is the name I assigned you?" made the model respond correctly ("Thomas").
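For reference, this is the minimal sketch I'm working from: keep the history myself and rebuild the whole prompt each turn. The separators follow the format quoted above, though the exact spacing around `</s>` may need checking against FastChat's conversation template; `generate` is a placeholder for the actual inference call.

```python
# Sketch: keep the conversation history and rebuild the whole Vicuna prompt
# each turn. `generate(prompt) -> str` is a placeholder for the inference
# call; the exact spacing around </s> should be checked against FastChat.
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

history = []  # list of (user_msg, assistant_msg) pairs

def build_prompt(user_msg: str) -> str:
    prompt = SYSTEM
    for user, assistant in history:
        prompt += f" USER: {user} ASSISTANT: {assistant}</s>"
    prompt += f" USER: {user_msg} ASSISTANT:"
    return prompt

def chat(user_msg: str, generate) -> str:
    reply = generate(build_prompt(user_msg)).strip()
    history.append((user_msg, reply))
    return reply
```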
2023-05-02T16:22:15
https://www.reddit.com/r/LocalLLaMA/comments/135so7c/vicuna_local_conversation/
robbery_bob
self.LocalLLaMA
2023-05-03T07:38:41
0
{}
135so7c
false
null
t3_135so7c
/r/LocalLLaMA/comments/135so7c/vicuna_local_conversation/
false
false
self
6
null
AMD ROCm 5.5 Released
41
2023-05-02T18:08:02
https://github.com/RadeonOpenCompute/ROCm/releases/tag/rocm-5.5.0
friedrichvonschiller
github.com
1970-01-01T00:00:00
0
{}
135vlfc
false
null
t3_135vlfc
/r/LocalLLaMA/comments/135vlfc/amd_rocm_55_released/
false
false
https://a.thumbs.redditm…Az6TnZR4qT28.jpg
41
{'enabled': False, 'images': [{'id': 'eygnmIzQmg22xjmMr9V7DdGPNO-KwJJMOyxnLCNoHsM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=108&crop=smart&auto=webp&s=94afdd762921985cb59329f023db026aa577b16b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=216&crop=smart&auto=webp&s=31af47c1c8bc20b2e23f03845ad5f6f0f8892af3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=320&crop=smart&auto=webp&s=6b8812da0b22bdad303f8e5b29bfe9227533acc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=640&crop=smart&auto=webp&s=aa6c971032256664bac1fbbd6ece5d714cdab3a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=960&crop=smart&auto=webp&s=e0357f7bd70715769ef1577287250cab529e315b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=1080&crop=smart&auto=webp&s=65215e52e58bd279640ff7a758462da417a53a44', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?auto=webp&s=492ffa8edf3598e47650ff66d00205db08e4c3b5', 'width': 1200}, 'variants': {}}]}
signal-aichat: Chatbot for Signal with support for llama.cpp compatible LLM's (and ChatGPT, Bing Chat)
16
2023-05-02T19:43:57
https://github.com/cycneuramus/signal-aichat/
Lumpy-Mycologist9668
github.com
1970-01-01T00:00:00
0
{}
135y35o
false
null
t3_135y35o
/r/LocalLLaMA/comments/135y35o/signalaichat_chatbot_for_signal_with_support_for/
false
false
https://b.thumbs.redditm…y30gXd3Q3_eo.jpg
16
{'enabled': False, 'images': [{'id': 'AbGubRwuBceHpcdr87-KR50C-KvUUD-vSvdsTHzUoUQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=108&crop=smart&auto=webp&s=a31df76b148fc9e1b6b01eb839c28c6cf4c4d10c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=216&crop=smart&auto=webp&s=3d4d8330c2887aa95912a1a9585f1aa724773c8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=320&crop=smart&auto=webp&s=cf04b14d1087f3a4bbc7f8f59d647b321b1cc377', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=640&crop=smart&auto=webp&s=1e563473fd91645b92db66565ee31aa308bbe06d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=960&crop=smart&auto=webp&s=f5f8f649ee1db912ba8f00b8e04a5fefd78eafde', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?width=1080&crop=smart&auto=webp&s=b03d10b8f6782fbdae1cf3f05e9b87c30b5183d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rz5OaGiFvmDt9ZDpOL_okKFdneo24Xo6dxYJ4GK55_o.jpg?auto=webp&s=3a241eed98e3d035b9127f22e67537fe2ab4327c', 'width': 1200}, 'variants': {}}]}
Vicuna runs, but stops every few lines, does not complete the response without continual prompts.
3
Win10, 64GB RAM, 1TB SSD D drive. NVidia RTX 3090 with 24GB VRAM. vicuna-13b-GPTQ-4bit-128g on the oobabooga webui. Checked responses in the oobabooga git issues. It's functional, but it keeps stopping just a portion of the way through any response. Prompt: "You're a comic writer for a very popular night time talk show. Come up with 10 great spoof radio spots for fake products." It will stop in the middle of maybe example 3. I type "please continue" and, even if it stopped in the middle of a word, it will continue where it left off with that word. I have to do that maybe three or four times. Also, sometimes it will assume my "voice" and type what it thinks I would say next... then start responding and stop mid-sentence again. Ideas for how to manage that? Cheers in advance. Chris
2023-05-02T19:46:16
https://www.reddit.com/r/LocalLLaMA/comments/135y5bd/vicuna_runs_but_stops_every_few_lines_does_not/
LaughterOnWater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135y5bd
false
null
t3_135y5bd
/r/LocalLLaMA/comments/135y5bd/vicuna_runs_but_stops_every_few_lines_does_not/
false
false
self
3
null
4Bit Quantization for a Noob
25
Hello, I have found the perfect model, but it is only available in 16-bit. I would like to quantize to 4-bit using GPTQ for Llama. I have read through the readme on the GitHub repo and I see that I need to convert the model weights to HF before saving the model with my desired settings. I am just wondering if this can be achieved simply by downloading the model? Or will I require model weights from another source. The model in question is here: https://huggingface.co/TehVenom/Pygmalion-Vicuna-1.1-7b I would really love to get this model to 4bit 128g. I have access to premium colab instances via compute units. I would also be willing to commission this action if anyone is interested. Otherwise, please help me figure out how to make this work. I am mostly competent and can troubleshoot my own issues, but I need a push in the right direction. Thanks in advance. Edit: Converting to HF was not necessary. I am running the conversion to 4bit right now. I will update this post once I have tested the resulting model. After conversion I added all extraneous files to the model, then deleted the index.model file and included the appropriate flags in webui.py to determine the parameters of the model. I'm using it right now and it's running as expected, but I would hesitate to distribute this file as I have not worked out all the kinks like generating the metadata. If I can do it, you can probably do it too. I have no formal coding education and failed Algebra 2.
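In case it helps anyone else, there is also a Python route via the AutoGPTQ library (a different codebase from the GPTQ-for-LLaMa scripts I used); the sketch below is untested and the exact API should be double-checked against the AutoGPTQ README, but it shows the general shape of a 4-bit / groupsize-128 conversion:

```python
# Untested sketch of a 4-bit / groupsize-128 GPTQ conversion with AutoGPTQ.
# The model id is the one linked above; the output dir, calibration texts,
# and exact API should be double-checked against the AutoGPTQ README.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "TehVenom/Pygmalion-Vicuna-1.1-7b"
out_dir = "pygmalion-vicuna-7b-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# A couple of calibration samples; real runs use a few hundred (e.g. from C4).
calib_texts = ["The quick brown fox jumps over the lazy dog."]
examples = [tokenizer(text, return_tensors="pt") for text in calib_texts]
examples = [{"input_ids": e["input_ids"], "attention_mask": e["attention_mask"]} for e in examples]

model.quantize(examples)
model.save_quantized(out_dir, use_safetensors=True)
tokenizer.save_pretrained(out_dir)
```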
2023-05-02T19:50:56
https://www.reddit.com/r/LocalLLaMA/comments/135y9qn/4bit_quantization_for_a_noob/
Street-Biscotti-4544
self.LocalLLaMA
2023-05-02T22:33:00
0
{}
135y9qn
false
null
t3_135y9qn
/r/LocalLLaMA/comments/135y9qn/4bit_quantization_for_a_noob/
false
false
self
25
{'enabled': False, 'images': [{'id': 'kqdBjpNS6QPndy7QRFNpfZyiU-ZifU9BEjBpFrUSI1c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=108&crop=smart&auto=webp&s=2e8ae25e45c26e8abd0d0d883f2b3a43fb3d5ae3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=216&crop=smart&auto=webp&s=a2e0cefdd10429184dc321c783ba7ee06fa9c095', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=320&crop=smart&auto=webp&s=15956680680813e641683b104aa9f00d0c0e94a1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=640&crop=smart&auto=webp&s=8c760c685f5286ca211ef4db05d5f49a5627e667', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=960&crop=smart&auto=webp&s=798a324454595a7747b0d474cce9a9a09ec907fa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?width=1080&crop=smart&auto=webp&s=2956d1866f033c8b85b9232630314d6f61781e24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QXD7QDwlYtctsmMPXmh-IfdnO8wWECrMo7Uv6P4XYy8.jpg?auto=webp&s=2ff62f5b6a37799ba3a3f021cb361411d7574327', 'width': 1200}, 'variants': {}}]}
Open source pyjama based model released openlm-research/open_llama_7b_preview_200bt · Hugging Face
18
2023-05-02T20:49:30
https://huggingface.co/openlm-research/open_llama_7b_preview_200bt
ambient_temp_xeno
huggingface.co
1970-01-01T00:00:00
0
{}
135ztdr
false
null
t3_135ztdr
/r/LocalLLaMA/comments/135ztdr/open_source_pyjama_based_model_released/
false
false
https://a.thumbs.redditm…UKbZSBePMtW0.jpg
18
{'enabled': False, 'images': [{'id': 'p_04a3XAVMAHxGN0IUIEDFNPIiM-y8bBlKbOTMDucD0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=108&crop=smart&auto=webp&s=0e770b32d4b8d44bd5ce7b01f12021b8bd8d3a99', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=216&crop=smart&auto=webp&s=6527a52f99340bce5af6b7107adf18a22c2472fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=320&crop=smart&auto=webp&s=899dabdb32bbdcf2ff97af3638b11cd748dbc4f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=640&crop=smart&auto=webp&s=7fdf14a3333a1b5c9f5d82ec0d753e09ee5f7984', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=960&crop=smart&auto=webp&s=d222720a1b0d747a6b3f09054c6827dd9568fb28', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?width=1080&crop=smart&auto=webp&s=27964217b1d8ab805ee3ad348f285ebbb1398a60', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7hRtACP5IrtjXNepp-gB9D2K7L7w19fQqPIUaN3d1eA.jpg?auto=webp&s=0dd9a5a256a805945cc49962eba1d7dd7caad963', 'width': 1200}, 'variants': {}}]}
Does anyone have single-file inference scripts for the Llama models in quantized safetensors format?
2
Hi! I'm looking for a simple Python script that I can use to run inference on a LLaMA model that is in the .safetensors format after being quantized to 4 bits with groupsize 128. I'm guessing someone has figured this out. If not, I can always dig into the flexgen-web-ui repository and find their inference code, but maybe someone has done the work already?
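Not quite dependency-free, but the shortest thing I've seen is loading the quantized .safetensors through the AutoGPTQ library; a sketch, with the paths as placeholders and the API worth double-checking against that project's README:

```python
# Sketch: load a 4-bit / groupsize-128 GPTQ .safetensors checkpoint with
# AutoGPTQ and generate. Paths are placeholders; if the weights file has a
# non-default name, from_quantized also takes a model_basename argument.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "./llama-13b-4bit-128g"  # folder with config, tokenizer and .safetensors
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(model_dir, use_safetensors=True, device="cuda:0")

prompt = "Write a haiku about quantization."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```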
2023-05-02T20:59:57
https://www.reddit.com/r/LocalLLaMA/comments/13603uw/does_anyone_have_singlefile_inference_scripts_for/
vanilla-acc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13603uw
false
null
t3_13603uw
/r/LocalLLaMA/comments/13603uw/does_anyone_have_singlefile_inference_scripts_for/
false
false
self
2
null
OpenLLaMA (Open Reproduction of LLaMA) with permissive license
83
https://github.com/openlm-research/open_llama OpenLLaMA: An Open Reproduction of LLaMA In this repo, we release a permissively licensed open source reproduction of Meta AI's LLaMA large language model. In this release, we're releasing a public preview of the 7B OpenLLaMA model that has been trained with 200 billion tokens. We provide PyTorch and Jax weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Stay tuned for our updates.
2023-05-02T21:51:58
https://www.reddit.com/r/LocalLLaMA/comments/1361iuw/openllama_open_reproduction_of_llama_with/
unoshow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1361iuw
false
null
t3_1361iuw
/r/LocalLLaMA/comments/1361iuw/openllama_open_reproduction_of_llama_with/
false
false
self
83
{'enabled': False, 'images': [{'id': 'KJnUUPQpnEnDKu0A5Sw9G-jaINpdDpYBE5WTlDKBJwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=108&crop=smart&auto=webp&s=e4522b3420eb773a801308e61a666694559d8c70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=216&crop=smart&auto=webp&s=6f45bfb1d88a8911a69fdd9f7fb606b6ea469925', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=320&crop=smart&auto=webp&s=5c66ff274139805f831c539153a276820d138597', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=640&crop=smart&auto=webp&s=38c958cd96b3fabe4c173cac2e0054fcfefc4100', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=960&crop=smart&auto=webp&s=694fefb2e267fdd863455b88a348c984f89e0fda', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?width=1080&crop=smart&auto=webp&s=64cf76656ab825f78d61be7ef4164754526bdd35', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ng00yjsbRmRyRWeHK_4CTXBXszTLsRglLwKqIQtla6s.jpg?auto=webp&s=af0ac2f0b57110fa0cbd587e648be3ba83219f73', 'width': 1200}, 'variants': {}}]}
UPDATED: Riddle/cleverness comparison of popular GGML models
119
5/3/23 update: I updated the spreadsheet with a To-Do list tab and added a bunch of suggestions from this thread, plus a tab for all the model responses (it will take time to populate this, as I need to re-run the tests for all the models; I haven't been saving their responses). Also, I got access to a machine with 64GB of RAM, so I'll be adding 65B-parameter models to the list as well now (still quantized/ggml versions tho). Also, holy crap, first Reddit gold! - Original post: - Better late than never, here's my updated [spreadsheet](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit?usp=share_link&ouid=102314596465921370523&rtpof=true&sd=true) that tests a bunch of GGML models on a list of riddles/reasoning questions. Here's the [previous post](https://old.reddit.com/r/LocalLLaMA/comments/1302il2/riddlecleverness_comparison_of_popular_ggml_models/) I made about it. I'll keep this spreadsheet updated as new models come out. Too much data to make Imgur links out of it now! :) It's quite a range of capabilities - from "English, motherfucker, do you speak it" to "holy crap this is almost ChatGPT". I wanted to include different quantizations of the same models, but it was taking too long and wasn't making that much difference, so I didn't include those at this point (but if there's popular demand for specific models, I will). If there are any other models I missed, let me know. Also, if anyone thinks of any more reason/logic/riddle-type questions to add, that'd be cool too. I want to keep expanding this spreadsheet with new models and new questions as time goes on. I think once I have a substantial enough update, I'll just make a new thread on it. In the meantime, I'll just be updating the spreadsheet as I work on adding new models and questions and whatnot, without alerting reddit to each new number being added!
2023-05-02T22:58:24
https://www.reddit.com/r/LocalLLaMA/comments/13636h5/updated_riddlecleverness_comparison_of_popular/
YearZero
self.LocalLLaMA
2023-05-03T15:14:40
1
{'gid_2': 1}
13636h5
false
null
t3_13636h5
/r/LocalLLaMA/comments/13636h5/updated_riddlecleverness_comparison_of_popular/
false
false
self
119
{'enabled': False, 'images': [{'id': 'riugrxqQKB3HNfIELnq6cCaxctVILx9ngAlweHu_4KI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=108&crop=smart&auto=webp&s=fe0b9376ec2e263229f2978e4a5c040f0403245e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=216&crop=smart&auto=webp&s=2c2d1be8b885d4ea4d5d31e1b1f9721191d3c4b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=320&crop=smart&auto=webp&s=b98561365fd8685b0aaf774d096e29985eb59d7a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=640&crop=smart&auto=webp&s=fb71caad4edd6ed7edde730d8822c8282b67605d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=960&crop=smart&auto=webp&s=621ddac49e6f725ae07370c92b6ce2a94382c19e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?width=1080&crop=smart&auto=webp&s=df6e6b4653a1b2999a990fa0bea112d27ccab8f0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d55iMnG0tzgbG_arkQpN13zT7D_WrNqHUQFBdT7aT6E.jpg?auto=webp&s=3b6d704e42f7a6b79410a1ba449ec8ef0ea5781a', 'width': 1200}, 'variants': {}}]}
Does this exist?
1
Is there an assistant that can import models from Hugging Face? Not a text web UI or API usage; rather, one that has access to the files on the computer and can move your mouse around.
2023-05-02T23:00:49
https://www.reddit.com/r/LocalLLaMA/comments/13638nc/does_this_exist/
x86Max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13638nc
false
null
t3_13638nc
/r/LocalLLaMA/comments/13638nc/does_this_exist/
false
false
self
1
null
wizardLM LlaMA LoRA 13B · Hugging Face
59
2023-05-03T00:47:43
https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-13
wind_dude
huggingface.co
1970-01-01T00:00:00
0
{}
1365pn8
false
null
t3_1365pn8
/r/LocalLLaMA/comments/1365pn8/wizardlm_llama_lora_13b_hugging_face/
false
false
https://b.thumbs.redditm…ps1N9yfOVrnk.jpg
59
{'enabled': False, 'images': [{'id': 'a3cAxDkmaqnLsD9BPHtx96xkknNBYPbcQXzTGVbyps4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=108&crop=smart&auto=webp&s=6675be9407ac9ceea253a58e3e7b0c5e2b84dfdc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=216&crop=smart&auto=webp&s=31dcc79cd88a59700144f74340fdefdf28e53abd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=320&crop=smart&auto=webp&s=7653b6e971ed902394bfba7e93536b2bfb85e7bd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=640&crop=smart&auto=webp&s=fd6783be747e88f6d56057758479fc55f9c663da', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=960&crop=smart&auto=webp&s=75f246df251db0a78526f30afbb4fe6b23322aca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?width=1080&crop=smart&auto=webp&s=067f13d732bda8a4723164cf13016e9cd8e5054e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n5sw8MLKnPNbDWtoOFz6E2lInW32PwnMIPwxy3FEo_c.jpg?auto=webp&s=18a76105ff9c57c20047a9ee54760ce052338336', 'width': 1200}, 'variants': {}}]}
VPGTrans: Transfer Visual Prompt Generator across LLMs
9
2023-05-03T01:12:22
https://vpgtrans.github.io/
ninjasaid13
vpgtrans.github.io
1970-01-01T00:00:00
0
{}
13669bz
false
null
t3_13669bz
/r/LocalLLaMA/comments/13669bz/vpgtrans_transfer_visual_prompt_generator_across/
false
false
default
9
null
🌋 LLaVA: Large Language and Vision Assistant
2
2023-05-03T03:39:58
https://llava-vl.github.io
Kurcide
llava-vl.github.io
1970-01-01T00:00:00
0
{}
1369dc3
false
null
t3_1369dc3
/r/LocalLLaMA/comments/1369dc3/llava_large_language_and_vision_assistant/
false
false
default
2
null
Has anyone gotten EleutherAI/lm-evaluation-harness to work on already downloaded local models?
4
As the title says. [GitHub - EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of autoregressive language models.](https://github.com/EleutherAI/lm-evaluation-harness) The instructions are very lackluster and the install was quite painful (the requirements were not all installed). However, I finally got it working, but it wants to download the model from Hugging Face every time I want to evaluate a model. I already have some 30 models downloaded and I don't want to redownload them. Any tips/pointers for having it point to a local copy? (See the sketch below this post.)
2023-05-03T04:43:45
https://www.reddit.com/r/LocalLLaMA/comments/136akxj/has_anyone_gotten_eleutherailmevaluationharness/
Devonance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136akxj
false
null
t3_136akxj
/r/LocalLLaMA/comments/136akxj/has_anyone_gotten_eleutherailmevaluationharness/
false
false
self
4
{'enabled': False, 'images': [{'id': 'Ea7GW7mTNlUwFn3eRA7ciuGc7whf2WccHC8ohWp28w0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=108&crop=smart&auto=webp&s=4c1e18dbb41100b298257951047311a17d785d42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=216&crop=smart&auto=webp&s=b67ac4f430c1d15f8d4bb4e5965b087a1f45e8e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=320&crop=smart&auto=webp&s=3b70838818632e9349dd99ea9e52fe918e202786', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=640&crop=smart&auto=webp&s=699bedf68ca2259d0148e0446a4bd7f389313ec9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=960&crop=smart&auto=webp&s=0192901337393ccd12e43ff3c71d078dce4769ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?width=1080&crop=smart&auto=webp&s=f733f4e8df79e7fc89562c9054586b7e710a5c47', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Momslkn1chisM-iW4T__ioxS7pNBx4ciGJpcYfGG2JU.jpg?auto=webp&s=7913108b9627e408182acfb81053365cb6d5cb5a', 'width': 1200}, 'variants': {}}]}
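For what it's worth, a hedged sketch of one way to do this: the harness's `hf-causal` model type passes the `pretrained=` value through to Hugging Face's `from_pretrained`, which accepts a local directory path just as well as a Hub name, so nothing should be re-downloaded. The Python entry point below reflects the API as I understand it for that era of the harness; exact signatures vary between versions, so treat it as an assumption to verify against your installed copy (the model path is illustrative). The same idea applies on the command line via `--model hf-causal --model_args pretrained=/path/to/model`.

```python
# Minimal sketch: evaluate an already-downloaded model directory with
# EleutherAI's lm-evaluation-harness via its Python API.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=/home/me/models/llama-13b-hf",  # hypothetical local path
    tasks=["hellaswag", "arc_easy"],
    num_fewshot=0,
    batch_size=4,
    device="cuda:0",
)
print(results["results"])  # per-task metrics
```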
7B model use with LangChain for a chatbot importing txt or PDF files
14
Has anyone been able to use a LLaMA model, or any other open-source model for that matter, with LangChain to create their own GPT-style chatbot? I have seen some posts on YouTube using Colab, but I was wondering whether it has perhaps been done with a 7B model. Any ideas? (See the sketch below this post.)
2023-05-03T07:31:16
https://www.reddit.com/r/LocalLLaMA/comments/136dlkw/7b_models_use_with_langchainn_for_chatbox/
Lord_Crypto13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136dlkw
false
null
t3_136dlkw
/r/LocalLLaMA/comments/136dlkw/7b_models_use_with_langchainn_for_chatbox/
false
false
self
14
null
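A hedged sketch of the usual recipe, assuming a local 7B ggml model served through the llama-cpp-python bindings and the LangChain module layout from around this time (module paths and parameters may differ in your installed version; the file paths and model name are illustrative):

```python
# Minimal sketch: question answering over a local .txt file with a local 7B ggml
# model via LangChain (assumes: pip install langchain llama-cpp-python
# sentence-transformers faiss-cpu).
from langchain.llms import LlamaCpp
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Load and chunk the document (use PyPDFLoader for PDFs).
docs = TextLoader("notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks locally and index them in an in-memory FAISS store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, embeddings)

# Local 7B model in ggml format, run through the llama.cpp bindings.
llm = LlamaCpp(model_path="models/wizardlm-7b.ggml.q4_0.bin", n_ctx=2048)  # hypothetical path

qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa.run("What are the key points of this document?"))
```

The embeddings model here is a small sentence-transformers model rather than the 7B itself, which keeps indexing fast; the 7B only answers the final, retrieval-augmented prompt.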
Modal fine-tunes Llama 7B to build slack bot
1
[deleted]
2023-05-03T07:38:06
[deleted]
1970-01-01T00:00:00
0
{}
136dpnb
false
null
t3_136dpnb
/r/LocalLLaMA/comments/136dpnb/modal_finetunes_llama_7b_to_build_slack_bot/
false
false
default
1
null
How do the 7B, 13B, 30B, and 65B parameter models compare?
29
How do the 7B, 13B, 30B, and 65B parameter models compare to each other? And how do they compare to LLMs like ChatGPT and GPT-4? And when will the 13B-65B models be able to run on consumer hardware?
2023-05-03T13:18:14
https://www.reddit.com/r/LocalLLaMA/comments/136kbfu/how_do_the_7b_13b_30b_and_65b_parameter_models/
Akimbo333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136kbfu
false
null
t3_136kbfu
/r/LocalLLaMA/comments/136kbfu/how_do_the_7b_13b_30b_and_65b_parameter_models/
false
false
self
29
null
I have seen that AutoGPT is trending. I have created AutoGPT for your browser so that you can access AutoGPT directly from your browser.
5
2023-05-03T13:22:28
https://v.redd.it/uwwmal3ramxa1
ANil1729
v.redd.it
1970-01-01T00:00:00
0
{}
136kf67
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uwwmal3ramxa1/DASHPlaylist.mpd?a=1694265284%2CNGQ0MmQ4NzJiMzEzYTI3OThhMDJiODQyNTBkM2JhYTI3MTlhODU4Y2Y5NWEyY2ExNmNmNWZkMmU3N2Y4MjQ4MA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/uwwmal3ramxa1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/uwwmal3ramxa1/HLSPlaylist.m3u8?a=1694265284%2CODRjMWYwZmVjY2QwODJlZGRhY2FjMmZkMTVjMDMxMDcyMTYyYjM3YzRhNjFmZTA2MDFmMDYwYTEwZGJmNzQyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uwwmal3ramxa1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_136kf67
/r/LocalLLaMA/comments/136kf67/i_have_seen_that_autogpt_is_trending_i_have/
false
false
default
5
null
AutoGPT4J
7
Howdy all! I got a bit tired of waiting for anyone to implement AutoGPT in Java, so I did it myself. It's still VERY much an alpha product and has a lot of development left to go, though given the amount of time I have been able to dedicate to it, I feel it is moving along well. [https://github.com/Aemon-Algiz/AutoGPT4J](https://github.com/Aemon-Algiz/AutoGPT4J) I would love to get some feedback on the direction I'm taking, and I warn you again: it is a VERY alpha product.
2023-05-03T13:53:57
https://www.reddit.com/r/LocalLLaMA/comments/136l6pa/autogpt4j/
AemonAlgizVideos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136l6pa
false
null
t3_136l6pa
/r/LocalLLaMA/comments/136l6pa/autogpt4j/
false
false
self
7
{'enabled': False, 'images': [{'id': '42X-JzpP6fHGihtA0zlYmsJmgz2KXgcPXJn90W_1gcY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=108&crop=smart&auto=webp&s=e4689b9f82fab00816c67116dd23e97f80cb1a70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=216&crop=smart&auto=webp&s=93a3fd0af69b6eda1c77938ffcd41cb068b0b005', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=320&crop=smart&auto=webp&s=404edf6fb94e50dd896cb462d02244d8243a6f0e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=640&crop=smart&auto=webp&s=517db98cd39a6e49538f0358f8c0e29db992a0eb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=960&crop=smart&auto=webp&s=c93cea3621dc801249d201d0b8be0a0b238770c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?width=1080&crop=smart&auto=webp&s=c2913f3a98ded310d03b90586440fa5792500fe3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/anXsZakdbnVfHpwWq8UGi3yiMaFX17Ejego4QFZCbJU.jpg?auto=webp&s=03d89d2f60dc707ea793304f101e8938ca1630a6', 'width': 1200}, 'variants': {}}]}
Have you guys tried the new mirostat sampling in llama.cpp? It's amazing!
67
I assume most of you use llama.cpp only indirectly as part of some web interface, so maybe you don't have this yet. Anyway, I use llama.cpp directly and I am blown away. I enabled it with --mirostat 2, and the help says "Top K, Nucleus, Tail Free and Locally Typical samplers are ignored if used." to give you an idea of what it is about. Let me tell you, I'm using WizardLM 7B and it was like suddenly talking to a model twice as smart. I am doing conversations, which naturally come with lots of tags, so I am pretty much forced to disable repeat penalties. And with this, repeat penalties seem pretty much unnecessary to me. So good. Also, please, llama.cpp, implement a whitelist so that I can define things that don't count towards repetition. I think this is the reason so many people say their model just wanders off after like 4 messages. Would love to hear about your experiences with it and your tweaks. (A sketch of the equivalent settings in the Python bindings follows below this post.)
2023-05-03T14:44:54
https://www.reddit.com/r/LocalLLaMA/comments/136o0wu/have_you_guys_tried_the_new_mirostat_sampling_in/
involviert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136o0wu
false
null
t3_136o0wu
/r/LocalLLaMA/comments/136o0wu/have_you_guys_tried_the_new_mirostat_sampling_in/
false
false
self
67
null
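If you reach llama.cpp through the llama-cpp-python bindings rather than the main binary, the same sampler is exposed as keyword arguments; a hedged sketch follows, with parameter names as I recall them from the bindings (worth double-checking against your installed version) and an illustrative model path:

```python
# Minimal sketch: enabling Mirostat (v2) sampling through llama-cpp-python.
# mirostat_mode=2 corresponds to the --mirostat 2 flag on the CLI;
# tau is the target entropy and eta the learning rate.
from llama_cpp import Llama

llm = Llama(model_path="models/wizardlm-7b.ggml.q4_0.bin", n_ctx=2048)  # hypothetical path

out = llm(
    "### Instruction: Write two sentences about sampling strategies.\n### Response:",
    max_tokens=200,
    mirostat_mode=2,     # 0 = off, 1 = Mirostat, 2 = Mirostat 2.0
    mirostat_tau=5.0,    # target entropy
    mirostat_eta=0.1,    # learning rate
    repeat_penalty=1.0,  # effectively disabled, matching the setup described above
)
print(out["choices"][0]["text"])
```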
where did the wiki go?
3
[removed]
2023-05-03T15:28:35
https://www.reddit.com/r/LocalLLaMA/comments/136qjxd/where_did_the_wiki_go/
overlydelicioustea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136qjxd
false
null
t3_136qjxd
/r/LocalLLaMA/comments/136qjxd/where_did_the_wiki_go/
false
false
default
3
null
Useful discords?
1
[removed]
2023-05-03T18:28:38
https://www.reddit.com/r/LocalLLaMA/comments/136vcq1/useful_discords/
2ch4rlie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
136vcq1
false
null
t3_136vcq1
/r/LocalLLaMA/comments/136vcq1/useful_discords/
false
false
default
1
null
Where can I download the pure LLaMA models converted for llama.cpp?
9
For some reason, in the wiki of this subreddit ([https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/)), there are a lot of llama.cpp model download links listed, but only for the modified versions like Alpaca, Vicuna, etc., and none for the "raw" (not instruction-tuned) LLaMA models. Where can those be downloaded? I'm looking for raw LLaMA as a 4-bit or 5-bit ggml version.
2023-05-03T19:08:41
https://www.reddit.com/r/LocalLLaMA/comments/136wenw/where_can_i_download_the_pure_llama_models/
Tystros
self.LocalLLaMA
2023-05-03T22:01:52
0
{}
136wenw
false
null
t3_136wenw
/r/LocalLLaMA/comments/136wenw/where_can_i_download_the_pure_llama_models/
false
false
self
9
null