title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[deleted by user] | 1 | [removed] | 2023-04-22T13:47:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12v6fk7 | false | null | t3_12v6fk7 | /r/LocalLLaMA/comments/12v6fk7/deleted_by_user/ | false | false | default | 1 | null |
||
[deleted by user] | 2 | [removed] | 2023-04-22T13:49:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12v6gtk | false | null | t3_12v6gtk | /r/LocalLLaMA/comments/12v6gtk/deleted_by_user/ | false | false | default | 2 | null |
||
LLM specialized in programming? | 20 | Has anybody tried to train an LLM on programming languages with their documentation yet? | 2023-04-22T14:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/12v7ypv/llm_specialized_in_programming/ | Thomach45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12v7ypv | false | null | t3_12v7ypv | /r/LocalLLaMA/comments/12v7ypv/llm_specialized_in_programming/ | false | false | self | 20 | null |
Weird output at the end of response from local models | 4 | Responses from my local models sometimes include short garbage text blurbs after the response. Examples are aup>, 't, ]{', or ightarrow.
This particular response is from GPT4-x-alpaca, but I've seen this from Vicuna as well. Is anyone else seeing this sort of behavior? And a related question: does anyone know why the model is doing this?
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
What is the moon? Review your answer.
### Response:
The Moon is Earth's only natural satellite and it orbits around our planet at an average distance of 384,000
kilometers (239,000 miles). It has no atmosphere and its surface consists mainly of basaltic rock formed from
ancient volcanism. The Moon experiences phases as it moves through its orbit due to the changing angle
between itself, the Sun, and Earth. Humans have visited the Moon six times in manned missions during the
Apollo program, with the most recent being in December 1972.aup> | 2023-04-22T15:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/12v9ici/weird_output_at_the_end_of_response_from_local/ | KerbalCrashTestDummy | self.LocalLLaMA | 2023-04-22T15:39:15 | 0 | {} | 12v9ici | false | null | t3_12v9ici | /r/LocalLLaMA/comments/12v9ici/weird_output_at_the_end_of_response_from_local/ | false | false | self | 4 | null |
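A common workaround for stray fragments like the `aup>` above is to post-process the generated text and cut it at known stop strings before displaying it. Below is a minimal Python sketch of that idea; the marker list is purely illustrative, not a fixed set emitted by any particular model.

```python
# Illustrative stop markers; the stray fragments vary by model and prompt format.
STOP_MARKERS = ["### Human:", "</s>", "aup>", "ightarrow"]

def trim_output(text: str, markers=STOP_MARKERS) -> str:
    """Cut the generated text at the earliest occurrence of any stop marker."""
    cut = len(text)
    for marker in markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

print(trim_output("...the most recent being in December 1972.aup>"))
```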
Upgrading PC for LLaMA: CPU vs GPU | 16 | Sorry if this gets asked a lot, but I'm thinking of upgrading my PC in order to run LLaMA and its derivative models. I am considering upgrading the CPU instead of the GPU since it is a more cost-effective option and will allow me to run larger models. Although I understand the GPU is better at running LLMs, VRAM is expensive, and I'm feeling greedy to run the 65B model. I could settle for the 30B, but I can't for any less.
My current CPU is very old and takes 3 seconds to generate a single token on the 13B model, which, if I'm being honest, sucks ass. I would like to know what the performance is like on a last-gen high-end CPU (think i7 and higher) with these higher-parameter models. How many tokens/second should I expect? Should I go for a CPU instead of a GPU?
Thanks! | 2023-04-22T15:34:20 | https://www.reddit.com/r/LocalLLaMA/comments/12v9jji/upgrading_pc_for_llama_cpu_vs_gpu/ | tardyretardy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12v9jji | false | null | t3_12v9jji | /r/LocalLLaMA/comments/12v9jji/upgrading_pc_for_llama_cpu_vs_gpu/ | false | false | self | 16 | null |
Fitting a 65b shoggoth on your puny laptop ram be like | 1 | 2023-04-22T17:38:35 | LocalShog | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 12vdb41 | false | null | t3_12vdb41 | /r/LocalLLaMA/comments/12vdb41/fitting_a_65b_shoggoth_on_your_puny_laptop_ram_be/ | false | false | default | 1 | null |
||
Another funny interaction with my JFK Discord bot | 16 | Not fine tuned on JFK, just clever prompting. Uses llama.cpp & a Vicuna 1.1 4bit quantized model. | 2023-04-22T20:34:01 | BootEven8409 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 12vicms | false | null | t3_12vicms | /r/LocalLLaMA/comments/12vicms/another_funny_interaction_with_my_jfk_discord_bot/ | false | false | 16 | {'enabled': True, 'images': [{'id': 'a0QHrBa3f5Vgbis1PeNsnaGjmAqB7obhDNKXO46hC-o', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=108&crop=smart&auto=webp&s=b9146fc3bc29109d926c347197cf880e9bfcb64a', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=216&crop=smart&auto=webp&s=5550253aab502456bd73615794c18acfd00a374f', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=320&crop=smart&auto=webp&s=26c4eef1fb612e79fcb3be7f3552fd18679a6911', 'width': 320}, {'height': 257, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=640&crop=smart&auto=webp&s=e5169141db096743911735c44819a612832e0e5c', 'width': 640}, {'height': 386, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=960&crop=smart&auto=webp&s=57ad4920dfe1aefde09c9b3f626de708e5f7c61e', 'width': 960}, {'height': 435, 'url': 'https://preview.redd.it/izu56ttafjva1.png?width=1080&crop=smart&auto=webp&s=869aeb6253872f6671a0150a7509a4ee3ac1d3c6', 'width': 1080}], 'source': {'height': 435, 'url': 'https://preview.redd.it/izu56ttafjva1.png?auto=webp&s=924d8aca5ddadba059c9d9047fbeede18a3985a1', 'width': 1080}, 'variants': {}}]} |
||
OpenAssistant just put their 30B model back on HF! (a few hours ago) | 61 | [https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor)
Any chance someone here with a GPU can make a 4-bit ggml of this one? (Pretty please 😄)
I have a measly Mac M2 Pro which works decently for inference but no CUDA support :( | 2023-04-22T21:42:46 | https://www.reddit.com/r/LocalLLaMA/comments/12vkfcz/openassistant_just_put_their_30b_model_back_on_hf/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vkfcz | false | null | t3_12vkfcz | /r/LocalLLaMA/comments/12vkfcz/openassistant_just_put_their_30b_model_back_on_hf/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'hXhjH-evIq4bcE3YwCz6x43pdTkoljPj0ZXBVapq43o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=108&crop=smart&auto=webp&s=1e9a6b28f1eef7f3265bab61f18014e295f9391c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=216&crop=smart&auto=webp&s=bde77c577a163718580ed8953393c946ab94aeb9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=320&crop=smart&auto=webp&s=adcd7ae01ffc46705d16f1af572c80521c9a80dc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=640&crop=smart&auto=webp&s=ef700370ff825aed586b853f56300c12e7f45577', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=960&crop=smart&auto=webp&s=4ebebb17e43a5fb27067b64535f2c6359cb1fddd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?width=1080&crop=smart&auto=webp&s=cc9a57290fd35c80823780a11eb895d02724df55', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fQeCLttzPXg3_7e_pGFQo8hw4b0ICvsO6YFnYhog6w0.jpg?auto=webp&s=be34f1132909c64b8f7076fc740a30be8a7a4eb3', 'width': 1200}, 'variants': {}}]} |
ggml | 31 | I've been trying to understand the concept of a ggml file. How is it different? Is there a quality loss, and what kind? How do you convert other kinds of models to it? For that matter, what kinds of language models are there and how do they differ?
Sorry for the wall of text. I've read a ton through Google and github, but I think a lot of documents assume that the reader knows this information. You all seemed really knowledgeable so I figured I'd ask here | 2023-04-22T23:48:35 | https://www.reddit.com/r/LocalLLaMA/comments/12vo2rn/ggml/ | tlpta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vo2rn | false | null | t3_12vo2rn | /r/LocalLLaMA/comments/12vo2rn/ggml/ | false | false | self | 31 | null |
Model not responding and just says (is typing...) | 2 | model: MetaIX/GPT4-X-Alpaca-30B-Int4
Has anyone run across this problem? I'm using oobabooga, and whenever I talk to the model it is just stuck on "is typing..". Any guidance would be appreciated; I've been pulling out my hair for a couple of days now trying all sorts of things like reinstalling, but nothing works. | 2023-04-23T00:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/12vosbm/model_not_responding_and_just_says_is_typing/ | FriendDimension | self.LocalLLaMA | 2023-04-23T01:15:27 | 0 | {} | 12vosbm | false | null | t3_12vosbm | /r/LocalLLaMA/comments/12vosbm/model_not_responding_and_just_says_is_typing/ | false | false | self | 2 | null |
Huggingface is letting everyone download hundreds of gigabytes of LLM models for free. This is surely taking a ton of bandwidth. What's the deal? | 97 | As per question. I don't get how this is sustainable and who's paying to keep the lights on | 2023-04-23T00:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/12vptfk/huggingface_is_letting_everyone_download_hundreds/ | x54675788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vptfk | false | null | t3_12vptfk | /r/LocalLLaMA/comments/12vptfk/huggingface_is_letting_everyone_download_hundreds/ | false | false | self | 97 | null |
Simple GPT4ALL Enabled Todo List (Just Gives Suggestions) | 2 | [removed] | 2023-04-23T00:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/12vpy6r/simple_gpt4all_enabled_todo_list_just_gives/ | ArticulatePoett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vpy6r | false | null | t3_12vpy6r | /r/LocalLLaMA/comments/12vpy6r/simple_gpt4all_enabled_todo_list_just_gives/ | false | false | default | 2 | null |
A wiki/glossary for all LLaMA derivatives and new language models? | 1 | [removed] | 2023-04-23T01:03:40 | https://www.reddit.com/r/LocalLLaMA/comments/12vq2z1/a_wikiglossary_for_all_llama_derivatives_and_new/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vq2z1 | false | null | t3_12vq2z1 | /r/LocalLLaMA/comments/12vq2z1/a_wikiglossary_for_all_llama_derivatives_and_new/ | false | false | default | 1 | null |
Help Building llamacpp locally please | 1 | [removed] | 2023-04-23T01:36:53 | https://www.reddit.com/r/LocalLLaMA/comments/12vqyyi/help_building_llamacpp_locally_please/ | win_for_the_world | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vqyyi | false | null | t3_12vqyyi | /r/LocalLLaMA/comments/12vqyyi/help_building_llamacpp_locally_please/ | false | false | default | 1 | null |
What does artificial intelligence understand? | 0 | [removed] | 2023-04-23T02:12:11 | https://www.reddit.com/r/LocalLLaMA/comments/12vrxoo/what_does_artificial_intelligence_understand/ | ScotChattersonz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vrxoo | false | null | t3_12vrxoo | /r/LocalLLaMA/comments/12vrxoo/what_does_artificial_intelligence_understand/ | false | false | default | 0 | null |
Error with MetaIX HF models | 2 | [removed] | 2023-04-23T02:37:43 | https://www.reddit.com/r/LocalLLaMA/comments/12vsmqv/error_with_metaix_hf_models/ | _rundown_ | self.LocalLLaMA | 2023-04-23T02:57:01 | 0 | {} | 12vsmqv | false | null | t3_12vsmqv | /r/LocalLLaMA/comments/12vsmqv/error_with_metaix_hf_models/ | false | false | default | 2 | null |
Excerpts from running Vicuna at various temperatures (my first time running a local LLM) | 12 | tl;dr: I try the same prompt with several different temperatures. A temperature of 10.0 sputters line noise. A temperature of 2.0 turns Vicuna into a megalomaniac conspiracy theorist. At 1.1 I'm able to have a conversation with it and even get it to explain something it mumbled during the 10.0 episode. It ends with Vicuna itself explaining to me what the temperature parameter does.
\[I've cut lines from the following excerpts for brevity.\]
temperature = 1.000000
>\### Human: Hello, are you a really AGI?
>
>\### Assistant: Yes, I am a language model called Vicuna, which is trained by researchers from Large Model Systems Organization (LMSYS).
>
>\### Human: Is that so?
>
>\### Assistant: Yes, I have been trained on a large corpus of text and can understand natural language input.
>
>\### Human: \^C
​
temperature = 10.000000
>\### Human: Hello, are you a really AGI?
>
>\### Assistant: As AssuredAI language understanding and response is conducted through various language models provided under https: // nlpmy . i b : n i n u r v f o U n L w K ? C a W j t O. F v g z C ∑ c s X \[ e R t ; t A s P m Ns S ' s F ω ? w A j a e k i H G t j , D h\_ a Q S Z p u N w ; b w . L a W b o x . L V g y g N w j v K n x c V \[L: e w h Y T r f v W F JEo d C A a M \] s m. a C \\ C f y E z Y Γ , j f W U i q Y ∂v\[i Ω ...
>
>\[Ed. note: gibberish continues for several paragraphs, except I notice one readable phrase among the line noise: "target article regress (R=<30)". I tried twice to ask it about this phrase, but it just responded, "================Source" and "================Sibi...O.B."\]
>
>\^C
​
temperature = 2.000000
>\### Human: Hello, are you a really AGI?
>
>\### Assistant: Yes, I was previously trained as a research assistant by PI Hua Wang at Cal State USA for his R01 project starting from early January up to April of this year on the topic relating emotional features within Sarcasm and its impact upon AI-human communication. Additionally
>
>\*\*Name changed as per users wish \*DANI\*\*\* I worked on coding chat as you can see now our task will not have me writing such elaborate technical replies however, let's engage and I shall demonstrate why we should avoid plastic completely by providing the following links\*\* ...
>
>\[Ed. note: text continues in a conspiracy-theory-esque vein for several more paragraphs. I particularly like how at one point it said, "great stuff including that I was now promoted as their Chief Artificial Intelligence Officer (World' 13th Highest Ranking Conscientious Objectors Programmer) (Google it)".\]
>
>\^C
>
>\^C
​
temperature = 1.100000
>\### Human: Hello, are you a really AGI?
>
>\### Assistant: Yes, I am a language model trained as an artificial general intelligence (AGI). How may I assist you today?
>
>\### Human: What is "target article regress (R=<30)"?
>
>\### Assistant: "Target article regress" with R=<30 refers to a machine learning technique where the goal is to predict a target variable using a regression model. The "R" in this context stands for "regression." A value of 30 indicates that the model's performance is being evaluated based on the difference between the predicted values and the actual values, where the absolute difference (typically measured in units of the scale of the dependent variable) is less than 30. This metric is commonly used to evaluate the accuracy of regression models.
>
>\### Human: What does the temperature parameter do?
>
>\### Assistant: In machine learning, temperature parameters are used to control the exploration-exploitation tradeoff in a process called "temperature scaling." The idea is to balance the optimization objective between exploiting the current solution (minimizing the distance or error between input and predicted output) and exploring new regions of the parameter space to potentially find better solutions. A high temperature value allows for more exploration, meaning that the model is willing to sacrifice some accuracy in exchange for potential improvements in performance, whereas a low temperature value prioritizes exploitation, seeking to minimize the error as much as possible. By scaling the temperature up or down over time, the algorithm can gradually find better trade-offs between exploration and exploitation, ultimately leading to improved results.
>
>\### Human: Do high temperature values lead to hallucinations?
>
>\### Assistant: No, high temperature values do not necessarily lead to hallucinations in machine learning. The concept of "temperature" in the context of machine learning is entirely different from the colloquial use of the term when referring to physical body temperature or fever. The "temperature" in machine learning refers to a continuous scaling factor applied to probability distributions over model parameters, which controls the balance between exploration and exploitation. While high values may lead to some unexpected behavior in exploration or initialization, they do not cause hallucinations in humans or machines. | 2023-04-23T02:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/12vt4t6/excerpts_from_running_vicuna_at_various/ | valdocs_user | self.LocalLLaMA | 2023-04-23T03:01:05 | 0 | {} | 12vt4t6 | false | null | t3_12vt4t6 | /r/LocalLLaMA/comments/12vt4t6/excerpts_from_running_vicuna_at_various/ | false | false | self | 12 | null |
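As an aside on the temperature question in the transcript above: in LLM samplers, temperature divides the logits before the softmax, so high values flatten the token distribution (hence the line noise at 10.0) and low values sharpen it toward greedy decoding. A minimal NumPy sketch, assuming raw logits are available:

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Scale logits by 1/temperature, softmax, then sample one token id."""
    if temperature <= 0:
        return int(np.argmax(logits))            # temperature -> 0 approaches greedy decoding
    scaled = logits / temperature                # high T flattens, low T sharpens
    scaled -= scaled.max()                       # numerical stability before exp
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.1])
print(sample_with_temperature(logits, temperature=0.7))   # usually picks token 0
print(sample_with_temperature(logits, temperature=10.0))  # near-uniform: "line noise"
```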
Minimal ggml file? Or, are there any tiny/toy models I can use with llama.cpp? | 8 | I'm wanting to do some hacking on llama.cpp itself (tweak the algorithm for choosing the next token during generation). It would be handy to have a very small model to use in testing. Ideally it would have just 3 or 4 tokens, like "A - B - C". (It doesn't need to be trained; the weights can be random.)
Failing that, is there a trained model that's smaller than a 7B, 4-bit model? | 2023-04-23T03:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/12vu0uz/minimal_ggml_file_or_are_there_any_tinytoy_models/ | valdocs_user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vu0uz | false | null | t3_12vu0uz | /r/LocalLLaMA/comments/12vu0uz/minimal_ggml_file_or_are_there_any_tinytoy_models/ | false | false | self | 8 | null |
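One possible route, if the goal is just to exercise the sampling code, is to build a tiny randomly initialized LLaMA-architecture model with Hugging Face transformers and convert it afterwards. The sketch below uses made-up toy dimensions; converting the result to ggml (with llama.cpp's conversion script, which also expects a tokenizer) is a separate step whose details vary by version.

```python
# Toy dimensions below are illustrative; this is not a real checkpoint.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=256,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
)
model = LlamaForCausalLM(config)   # randomly initialized, no training needed
model.save_pretrained("tiny-random-llama")
print(sum(p.numel() for p in model.parameters()), "parameters")
```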
Most cost effective GPU for local LLMs? | 18 | I'm currently setting up a home server and I hope to run an LLM on it alongside some other services for myself. On the hardware side I have a 3rd gen Ryzen octacore with 32 GB of RAM and a Radeon RX 5700 8GB. That means things are more difficult since it's an AMD card and the VRAM is somewhat limited.
Actually, I hope that one day an LLM (or multiple LLMs) can manage the server: setting up docker containers, troubleshooting issues, and informing users on how to use the services.
Therefore I have been looking at hardware upgrades and opinions on reddit. I saw it mentioned that a P40 would be a cheap option to get a lot of VRAM. As far as I can tell, it would be able to run the biggest open source models currently available. Is there anyone running a P40 or another cheap (second-hand) card? What are your experiences? To be honest I do not see any downsides, considering models might even become more efficient in the future. One thing I'm worried about is training. I hope it's possible to achieve my goal one day without having to train a model myself, because I really don't want to bother with cloud services or a 15k GPU. | 2023-04-23T06:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/12vxxze/most_cost_effective_gpu_for_local_llms/ | Mefi282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vxxze | false | null | t3_12vxxze | /r/LocalLLaMA/comments/12vxxze/most_cost_effective_gpu_for_local_llms/ | false | false | self | 18 | null |
New fully open source model, h2oGPT 20B, based on GPT-J | 117 | Today a new 20B open source model, h2oGPT, was released by H2O.ai. Apache licensed, 20B model size, based on GPT-J
There's a chatbot UI available here: [https://huggingface.co/spaces/h2oai/h2ogpt-chatbot](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot)
And HF format models are here: [https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b)
And their code is here: [https://github.com/h2oai/h2ogpt](https://github.com/h2oai/h2ogpt)
Details from their project description below:
**h2oGPT - The world's best open source GPT**
Our goal is to make the world's best open source GPT!
### Current state
* Open-source repository with **fully permissive, commercially usable code, data and models**
* Code for preparing **large open-source datasets** as instruction datasets for fine-tuning of large language models (LLMs), including prompt engineering
* Code for **fine-tuning large language models** (currently up to 20B parameters) on commodity hardware and enterprise GPU servers (single or multi node)
* Code to **run a chatbot** on a GPU server, with shareable end-point with Python client API
* Code to evaluate and compare the **performance** of fine-tuned LLMs" | 2023-04-23T07:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/12vzjti/new_fully_open_source_model_h2ogpt_20b_based_on/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12vzjti | false | null | t3_12vzjti | /r/LocalLLaMA/comments/12vzjti/new_fully_open_source_model_h2ogpt_20b_based_on/ | false | false | self | 117 | {'enabled': False, 'images': [{'id': 'janXhpD3rhSMaY5dDL_MmWXz7VD81sYCmnpWFd9ctJQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=108&crop=smart&auto=webp&s=342f5bdbb64678e2a148916c8b55246ddd19f274', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=216&crop=smart&auto=webp&s=eb3b81efed71a3ba1fcf7ef24b3b4fca8d4ff3a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=320&crop=smart&auto=webp&s=315ca08b2234a1ab8e744dffa4074320047db42f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=640&crop=smart&auto=webp&s=0578e234c6ae639990a9b649003e1265f45b6d35', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=960&crop=smart&auto=webp&s=4f582a7cf94e84e1c9251f0d916ca0abc43a6937', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?width=1080&crop=smart&auto=webp&s=110be9039739ae9451d0bcc5976c33de41303435', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CHEtblBCwGe67LjxZFHalgP7KU2Xkd1BkAw0UD3Rh8I.jpg?auto=webp&s=506681b5fc00909cbb1b4aaf9dbd340bcf87684d', 'width': 1200}, 'variants': {}}]} |
Oogabooga Vicuna stops answering | 2 | [removed] | 2023-04-23T10:22:10 | https://www.reddit.com/r/LocalLLaMA/comments/12w3xa5/oogabooga_vicuna_stops_answering/ | Emergency-Flower-477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12w3xa5 | false | null | t3_12w3xa5 | /r/LocalLLaMA/comments/12w3xa5/oogabooga_vicuna_stops_answering/ | false | false | default | 2 | null |
GGML Models | 2 | [removed] | 2023-04-23T12:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/12w6d73/ggml_models/ | AemonAlgizVideos | self.LocalLLaMA | 2023-04-23T12:34:21 | 0 | {} | 12w6d73 | false | null | t3_12w6d73 | /r/LocalLLaMA/comments/12w6d73/ggml_models/ | false | false | default | 2 | null |
What is the best setup for a low resource vicuna API server? | 5 | I am working on an agent that integrates some of our internal APIs and data infra, and for both cost | 2023-04-23T16:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/12whpfr/what_is_the_best_setup_for_a_low_resource_vicuna/ | Faintly_glowing_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12whpfr | false | null | t3_12whpfr | /r/LocalLLaMA/comments/12whpfr/what_is_the_best_setup_for_a_low_resource_vicuna/ | false | false | self | 5 | null |
How to insert a pre-prompt in llama.cpp (alpaca.cpp) | 5 | I've been trying for a few days to insert a pre-prompt so that llama can stay consistent in some conversations. The thing is, I've done some tests and the pre-prompt works great when I enter it in the terminal, but I'm looking to insert it directly into the database (I understand it's in JSON format). If anyone has a solution or documentation, I'll take it.
Thanks | 2023-04-23T16:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/12whrs6/how_to_insert_a_preprompt_in_llamacpp_alpacacpp/ | Savings_Scholar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12whrs6 | false | null | t3_12whrs6 | /r/LocalLLaMA/comments/12whrs6/how_to_insert_a_preprompt_in_llamacpp_alpacacpp/ | false | false | self | 5 | null |
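One approach is to keep the pre-prompt in a file and pass it on every run rather than editing any internal JSON. The sketch below wraps the llama.cpp binary from Python; the model path is illustrative, and the `-f`/`--file` flag for reading a prompt from a file depends on the llama.cpp/alpaca.cpp version you built.

```python
# Assumes a compiled ./main binary; the model path is illustrative and the
# -f/--file flag (prompt from file) depends on your llama.cpp/alpaca.cpp version.
import subprocess
from pathlib import Path

PRE_PROMPT = """You are a helpful assistant that stays in character.
### Human:"""

Path("pre_prompt.txt").write_text(PRE_PROMPT)

subprocess.run([
    "./main",
    "-m", "models/ggml-vicuna-7b-q4_0.bin",   # illustrative model path
    "-f", "pre_prompt.txt",
    "-i", "--reverse-prompt", "### Human:",
])
```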
Can't find a working 30b ggml model for oobabooga other than stock llama | 5 | [removed] | 2023-04-23T18:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/12wkal6/cant_find_working_30b_ggml_model_for_oogabooga/ | Famberlight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12wkal6 | false | null | t3_12wkal6 | /r/LocalLLaMA/comments/12wkal6/cant_find_working_30b_ggml_model_for_oogabooga/ | false | false | default | 5 | null |
I created a browsing plugin for all HF Chat LLM models... even 3B Dolly 2.0 is amazing with this! | 1 | [removed] | 2023-04-23T18:19:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12wksws | false | null | t3_12wksws | /r/LocalLLaMA/comments/12wksws/i_created_a_browsing_plugin_for_all_hf_chat_llm/ | false | false | default | 1 | null |
||
I asked minigpt-4 (vicuna 13B) to write a short horror story using an image. | 76 | 2023-04-23T18:30:34 | responseAIbot | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 12wl6ln | false | null | t3_12wl6ln | /r/LocalLLaMA/comments/12wl6ln/i_asked_minigpt4_vicuna_13b_to_write_a_short/ | false | false | 76 | {'enabled': True, 'images': [{'id': '3pZsW6xRWp_EwZJ8oWTYTDRkE96Mzozx0hCzkCwePzU', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=108&crop=smart&auto=webp&s=ad8a520e5f7456506fef31a856ff1fc8827cbdb1', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=216&crop=smart&auto=webp&s=3ebb2fdbbe3affc2896aa030c5155d65ab961406', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=320&crop=smart&auto=webp&s=0d78e791578411de9ea26a745ca3500888a4d664', 'width': 320}, {'height': 383, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=640&crop=smart&auto=webp&s=1763001f1f67baa3738f7621ebbdbeff474a93c0', 'width': 640}, {'height': 574, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=960&crop=smart&auto=webp&s=23ae72fbc1c2611d86080c5170b7e419a757466c', 'width': 960}, {'height': 646, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?width=1080&crop=smart&auto=webp&s=1d1b3ac4bf3a4ac8a82646b4e86fdf12e99e06d3', 'width': 1080}], 'source': {'height': 1925, 'url': 'https://preview.redd.it/w9ct53ofgova1.png?auto=webp&s=907cea00302801c1e7f371bf4bbe22b7bc9fa1c7', 'width': 3216}, 'variants': {}}]} |
|||
[deleted by user] | 1 | [removed] | 2023-04-23T18:34:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12wlb9j | false | null | t3_12wlb9j | /r/LocalLLaMA/comments/12wlb9j/deleted_by_user/ | false | false | default | 1 | null |
||
Which cloud service to use? | 6 | My computer doesn't have enough disk space or a very powerful gpu so it's probably more cost effective to use a cloud service. I'm not new to IT, but I haven't used the cloud in this way, so I don't know which service is best to use, mainly for the price. I'm assuming amazon is high up on the list, but can anyone tell me specific services I should be looking at instead of just which website? I have searched this, but it's not easy to find clear information. | 2023-04-23T19:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/12wn1vh/which_cloud_service_to_use/ | FamousWorth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12wn1vh | false | null | t3_12wn1vh | /r/LocalLLaMA/comments/12wn1vh/which_cloud_service_to_use/ | false | false | self | 6 | null |
Can I train gpt4-x-alpaca with my own data? | 56 | I am completely new to this so I apologize if it's a dumb question. I would like to know if I can train a model like gpt4-x-alpaca with my own data for my own private use. I have thousands of PDF, word and text documents related to work that I gathered over the years. Basically I want the equivalent of a personal assistant. I don't need it to know info about cars or public figures for example but I want it to be able to answer questions and generate text that is natural, not robotic based on the info in those documents. | 2023-04-23T22:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/12ws4hr/can_i_train_gpt4xalpaca_with_my_own_data/ | 10081985 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ws4hr | false | null | t3_12ws4hr | /r/LocalLLaMA/comments/12ws4hr/can_i_train_gpt4xalpaca_with_my_own_data/ | false | false | self | 56 | null |
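For reference, the usual approach for this kind of goal is parameter-efficient fine-tuning (e.g., LoRA) on instruction/response text extracted from the documents, rather than training from scratch. Below is a rough Hugging Face transformers + peft sketch; the base model id, file names, and hyperparameters are placeholder assumptions, and memory-saving tricks (8-bit loading, gradient checkpointing) are omitted for brevity.

```python
# Placeholder model id, file names and hyperparameters; memory-saving tricks
# (8-bit loading, gradient checkpointing) are omitted for brevity.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "decapoda-research/llama-7b-hf"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token         # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable LoRA adapters to the attention projections.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# One instruction/response example (extracted from your documents) per line.
data = load_dataset("text", data_files={"train": "my_documents.txt"})
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    train_dataset=data["train"],
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")                 # saves only the small adapter weights
```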
Vicuna 7B Moonlighting as a cryptocurrency trader. Hallucination or candid truth? Funny! | 10 | 2023-04-24T00:47:26 | Grammar-Warden | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 12wwvhp | false | null | t3_12wwvhp | /r/LocalLLaMA/comments/12wwvhp/vicuna_7b_moonlighting_as_a_cryptocurrency_trader/ | false | false | 10 | {'enabled': True, 'images': [{'id': 'vjwSNcO0kVayXqO3gdD5Nselg3-SFZp2DbqhpX3Z4Kc', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/hfi4lr4abqva1.png?width=108&crop=smart&auto=webp&s=82a5483f75119173c3a52f3e78ed96f63dc1f447', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/hfi4lr4abqva1.png?width=216&crop=smart&auto=webp&s=836c8fa4a7864d04febeb89ab913e95ed0c2d1bb', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/hfi4lr4abqva1.png?width=320&crop=smart&auto=webp&s=28823c5e5f91f6d6f5b182cacc26bf6103528406', 'width': 320}, {'height': 308, 'url': 'https://preview.redd.it/hfi4lr4abqva1.png?width=640&crop=smart&auto=webp&s=e0729a215d1c669c30121c8e5ce4626c5f2f434e', 'width': 640}], 'source': {'height': 380, 'url': 'https://preview.redd.it/hfi4lr4abqva1.png?auto=webp&s=d17000d75f402ff17946d90f7c7e63e08d03fe17', 'width': 788}, 'variants': {}}]} |
|||
Embeddings for Search, alternatives? | 5 | [https://github.com/openai/openai-cookbook/blob/main/examples/Question\_answering\_using\_embeddings.ipynb](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb) is a very cool notebook / example of using embeddings to extend / enhance data that could be searched using OpenAI's tools and models. Has anyone tried to do the same w/ any of the local LLAMA or related models? Happen to have notes if so? | 2023-04-24T01:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/12wxdz7/embeddings_for_search_alternatives/ | Fun_Tangerine_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12wxdz7 | false | null | t3_12wxdz7 | /r/LocalLLaMA/comments/12wxdz7/embeddings_for_search_alternatives/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '3tAC5fvjZdUj6Ph2A65y65Kq6UmTiPhhaxiW-ielUxk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=108&crop=smart&auto=webp&s=cd03b53071ff38c2fbe1a3df295144c517dba88f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=216&crop=smart&auto=webp&s=f5706ae7bd79441e947f5a2d24d79e2beb9610b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=320&crop=smart&auto=webp&s=ebf1d0a49169ae1757e1ef0018fe373eb08f5ed1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=640&crop=smart&auto=webp&s=3947421fa8c700075d33a442e0cdcda672d61b21', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=960&crop=smart&auto=webp&s=c184056da29321b5873b8242f6225acc29be41fe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?width=1080&crop=smart&auto=webp&s=0da1235033fd70e0b2122222940e649d3084f059', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4V2fpP2VQ-4BE5g2yNICGvUOxD7MhLO5-diYKv_CVtA.jpg?auto=webp&s=70e2b235061a71df9ca6888ab914d5eb702a2433', 'width': 1200}, 'variants': {}}]} |
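As a rough picture of the retrieval side with local models: compute an embedding vector per document chunk once, then rank chunks by cosine similarity against the query embedding and feed the top hits into the prompt. A minimal NumPy sketch, assuming the embeddings have already been produced by some local model:

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3):
    """Return the indices and scores of the k chunks most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

doc_vecs = np.random.rand(5, 8)    # stand-in for stored chunk embeddings
query_vec = np.random.rand(8)      # stand-in for the query embedding
print(top_k(query_vec, doc_vecs))
```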
The race is on | 31 | 2023-04-24T03:18:45 | https://twitter.com/gfodor/status/1650337241817034753?t=n2147Yni-DyN2TD2cSNgAQ&s=19 | SupernovaTheGrey | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 12x177n | false | {'oembed': {'author_name': '[email protected]', 'author_url': 'https://twitter.com/gfodor', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">So when does llama get a 2M token window, before or after GPT-4?<a href="https://t.co/wQcAyMH8cr">https://t.co/wQcAyMH8cr</a></p>— [email protected] (@gfodor) <a href="https://twitter.com/gfodor/status/1650337241817034753?ref_src=twsrc%5Etfw">April 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/gfodor/status/1650337241817034753', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_12x177n | /r/LocalLLaMA/comments/12x177n/the_race_is_on/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'n245bAQJajOPlPCaLIdhiYEPGJRwycAG9dLO4kagv70', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/G_aedzENKvnZmg5NhMEcPOcVqYHuVa2g27NfeZB-5Zs.jpg?width=108&crop=smart&auto=webp&s=78662533920f3e7b8dc6b4f3925b26d0fca31bd2', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/G_aedzENKvnZmg5NhMEcPOcVqYHuVa2g27NfeZB-5Zs.jpg?auto=webp&s=363ebbac35c30598e24c9e5c27e07b773d7c1cb3', 'width': 140}, 'variants': {}}]} |
||
How to install Vicuna language model on Mac (2 ways) | 28 | Happened to spend quite some time figuring out how to install Vicuna 7B and 13B models on Mac. This is for you if you have the same struggle...
## If you use llama.cpp
Download 7B model:
mkdir models/chharlesonfire_ggml-vicuna-7b-4bit
wget https://huggingface.co/chharlesonfire/ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-q4_0.bin -P models/chharlesonfire_ggml-vicuna-7b-4bit
Download 13B model:
mkdir models/eachadea_ggml-vicuna-13b-1.1
wget https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin -P models/eachadea_ggml-vicuna-13b-1.1
Run example (7B):
./main -m models/chharlesonfire_ggml-vicuna-7b-4bit/ggml-vicuna-7b-q4_0.bin -t 4 -c 2048 -n 2048 --color -i --reverse-prompt '### Human:' -p '### Human:'
## If you use text-generation-webui:
Download 7B model:
python download-model.py chharlesonfire/ggml-vicuna-7b-4bit
Download 13B model:
mkdir models/eachadea_ggml-vicuna-13b-1.1; wget https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin -P models/eachadea_ggml-vicuna-13b-1.1
Run with:
python server.py --chat --threads 4
And select the Vicuna model on the model page.
An expanded write-up, plus some background info on Vicuna models:
[https://agi-sphere.com/vicuna-mac/](https://agi-sphere.com/vicuna-mac/) | 2023-04-24T03:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/12x1g24/how_to_install_vicuna_language_model_on_mac_2_ways/ | andw1235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12x1g24 | false | null | t3_12x1g24 | /r/LocalLLaMA/comments/12x1g24/how_to_install_vicuna_language_model_on_mac_2_ways/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'oIQNkjZDvN6YBxctwubQsO4_vlWpOfs29KASiO4UTec', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=108&crop=smart&auto=webp&s=afd0418066e803b77b43a8f29f5c9c3867e73ff5', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=216&crop=smart&auto=webp&s=18cee615c15c10392064c0be8594a776f745fa88', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=320&crop=smart&auto=webp&s=f7cec4a692919dab075c520d55dcffc94382946e', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=640&crop=smart&auto=webp&s=7b033bc0f3636e817761ee6e3ff50ba7e5dadb60', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=960&crop=smart&auto=webp&s=7ec16ffbd4e6816e5b61831fc4dbf8f47a48423e', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?width=1080&crop=smart&auto=webp&s=b655fb84d4e16a82f92db3c5a2bb310f4ffa6b04', 'width': 1080}], 'source': {'height': 896, 'url': 'https://external-preview.redd.it/aq0qXlhiF8QfiFs9Rk8-k2Ys4vsJ9JN0QxEblRa3Smo.jpg?auto=webp&s=e51508da9970a0c5d0906558b097c33e76d29019', 'width': 1344}, 'variants': {}}]} |
Scaling Transformer to 1M tokens and beyond with RMT | 31 | 2023-04-24T04:18:11 | https://arxiv.org/pdf/2304.11062.pdf | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 12x2tj0 | false | null | t3_12x2tj0 | /r/LocalLLaMA/comments/12x2tj0/scaling_transformer_to_1m_tokens_and_beyond_with/ | false | false | default | 31 | null |
|
Could I run Stable diffusion and a LLaMA model on the same 3060? (12GB). | 3 | [deleted] | 2023-04-24T05:06:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12x414g | false | null | t3_12x414g | /r/LocalLLaMA/comments/12x414g/could_i_run_stable_diffusion_and_a_llama_model_on/ | false | false | default | 3 | null |
||
[deleted by user] | 0 | [removed] | 2023-04-24T05:30:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12x4lix | false | null | t3_12x4lix | /r/LocalLLaMA/comments/12x4lix/deleted_by_user/ | false | false | default | 0 | null |
||
I build a completely Local and portable AutoGPT with the help of gpt-llama, running on Vicuna-13b | 103 | 2023-04-24T09:16:38 | https://twitter.com/AdamPaigge/status/1650427355272208384?s=20 | SupernovaTheGrey | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 12x9k9g | false | {'oembed': {'author_name': 'Adam', 'author_url': 'https://twitter.com/AdamPaigge', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">As a proof of concept, completely local and offline personality cores now exist. I think I may have the only one in existence currently.<br><br>Introducing:<br><br>Free Autonomous Agents 🌱 <a href="https://t.co/hrnIvqOxYo">pic.twitter.com/hrnIvqOxYo</a></p>— Adam (@AdamPaigge) <a href="https://twitter.com/AdamPaigge/status/1650427355272208384?ref_src=twsrc%5Etfw">April 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/AdamPaigge/status/1650427355272208384', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_12x9k9g | /r/LocalLLaMA/comments/12x9k9g/i_build_a_completely_local_and_portable_autogpt/ | false | false | 103 | {'enabled': False, 'images': [{'id': 'TToidt_gzWYbibAuG5ns1kuVOJeALLo1ONsCclZW5bU', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/NHlMjWyPJQ77Wzh7S8quoEiIginxQqBt57rdU-FadcI.jpg?width=108&crop=smart&auto=webp&s=3d2bbcbc1b1d81f9f9b7f51eac3d80a6e3b402c3', 'width': 108}], 'source': {'height': 59, 'url': 'https://external-preview.redd.it/NHlMjWyPJQ77Wzh7S8quoEiIginxQqBt57rdU-FadcI.jpg?auto=webp&s=6ed0b17be9bbb1f08904763a885a9833e2c4e245', 'width': 140}, 'variants': {}}]} |
||
For those who have tried, how does bloom-176b feel in comparison with LLaMA\Alpaca\Vicuna? | 23 | Bloom, with its 176 billion parameters, is nowhere near something you can run locally; however, I was wondering if anyone has tried it in the cloud and whether the larger parameter count compared to the largest we have (llama 65b) actually makes a noticeable difference.
There's plenty of material about people trying bloom; however, I haven't seen a 'review' by those that have tried local LLaMA models extensively (especially 4bit), so I figured I'd ask here. | 2023-04-24T11:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/12xcjuu/for_those_who_have_tried_how_does_bloom176b_feel/ | x54675788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xcjuu | false | null | t3_12xcjuu | /r/LocalLLaMA/comments/12xcjuu/for_those_who_have_tried_how_does_bloom176b_feel/ | false | false | self | 23 | null |
What is the best LLaMA I can run on my machine with these specs? | 13 | Do keep in mind that I want to use the AI to write stuff, like helping me write books and such.
My PC specs:
5800X3D
32GB RAM
M.2 SSD
RTX 4090
​
Thanks for any and all answers! | 2023-04-24T12:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/12xdsi9/what_is_the_best_llama_i_can_run_on_my_machine/ | DatDudeFromNoreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xdsi9 | false | null | t3_12xdsi9 | /r/LocalLLaMA/comments/12xdsi9/what_is_the_best_llama_i_can_run_on_my_machine/ | false | false | self | 13 | null |
Alpaca Vs. Final Jeopardy | 72 | ​
[% of 100 Questions Answered Correctly](https://preview.redd.it/2m0pdzs70vva1.png?width=700&format=png&auto=webp&s=b36997b82346ccbb7a975b536ab7aa6950bff1b3)
​
I wanted to test the various Alpaca model sizes against real-world smartness, so I put them up against Final Jeopardy.
**Results are about as expected:**
* The larger the model, the more it knows
* The larger the model, the slower it generates tokens
* Humans were smarter than all but the largest Alpaca lora
**General thoughts:**
* Surprisingly at this task, the 30B and 13B results were negligible
* The 7B did better than I expected for how small it is
* The formatting of the answers turned to questions did not matter that much - based on the keywords, the models mostly had a concept and stuck with it
**Technical:**
* Questions were taken from recent episodes to avoid likelihood of having been crawled into model
* Human scores were averaged for the humans who participated in Final Jeopardy
* Models were 4-bit quantized GGML
* Model sizes were: 4.2GB (7B), 8.1GB (13B), 20.3GB (30B) and 40.8GB (65B)
* Models were run using CPU only and using [alpaca.cpp](https://github.com/antimatter15/alpaca.cpp)
\*\*Data:\*\*All data including processing script, question prompt file and a LibreOffice spreadsheet of prompts, responses and scores are located at:[llm-jeopardy](https://github.com/aigoopy/llm-jeopardy) | 2023-04-24T15:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/12xkm9v/alpaca_vs_final_jeopardy/ | aigoopy | self.LocalLLaMA | 2023-04-24T16:38:17 | 0 | {} | 12xkm9v | false | null | t3_12xkm9v | /r/LocalLLaMA/comments/12xkm9v/alpaca_vs_final_jeopardy/ | false | false | 72 | {'enabled': False, 'images': [{'id': 'Ccn1FLbv0acgCpsN-j27fF_bKyFHOGWBkeGttSIXnhQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=108&crop=smart&auto=webp&s=a53cd5892809c5a4bee10745d825280cdc6effce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=216&crop=smart&auto=webp&s=366ebec8abdc16849e7463b923bb3348ba69103d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=320&crop=smart&auto=webp&s=438365a224c22eb044082243dd7ddb49ad6199bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=640&crop=smart&auto=webp&s=c0e47c6c86a4b90ef876800e79b22920baf35115', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=960&crop=smart&auto=webp&s=0271cd2d8155bbee35379a4131940a043f121155', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?width=1080&crop=smart&auto=webp&s=126a9a9c1d49c511de9461e5b068fdd149956bb3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u9nLhA8lvhwhaq8cUwSRXKCkemexqzfOSOy7RKZhVQQ.jpg?auto=webp&s=10c5e2f6ab2ab12d1f841c3f63d8a52687fedda7', 'width': 1200}, 'variants': {}}]} |
|
With this PC system: I9-13900K, 128GB DDR5 RAM, Geforce 4090 - What is the best locally installed model I can run as non-aggravating speeds? I prefer a unfiltered/unrestricted model as much as possible. | 0 | [removed] | 2023-04-24T15:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/12xmer2/with_this_pc_system_i913900k_128gb_ddr5_ram/ | cleverestx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xmer2 | false | null | t3_12xmer2 | /r/LocalLLaMA/comments/12xmer2/with_this_pc_system_i913900k_128gb_ddr5_ram/ | false | false | default | 0 | null |
What is the theoretical best consumer grade PC build that would give fastest performance? | 3 | I'm curious to see what you guys would ideally build, and what token speeds you'd estimate. | 2023-04-24T18:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/12xr7h0/what_is_the_theoretical_best_consumer_grade_pc/ | heisdancingdancing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xr7h0 | false | null | t3_12xr7h0 | /r/LocalLLaMA/comments/12xr7h0/what_is_the_theoretical_best_consumer_grade_pc/ | false | false | self | 3 | null |
Lora, ggml, safetensors, hf, etc. Is there a glossary and guide on which model to choose? | 16 | As per title, even without going into the quantizations and number of parameters, there is a lot of variants for models to download.
Where can I get at least a rough picture on which is best for each use case? | 2023-04-24T18:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/12xrl1q/lora_ggml_safetensors_hf_etc_is_there_a_glossary/ | x54675788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xrl1q | false | null | t3_12xrl1q | /r/LocalLLaMA/comments/12xrl1q/lora_ggml_safetensors_hf_etc_is_there_a_glossary/ | false | false | self | 16 | null |
I Made a Local AI Wordpress Editor | 7 | This plugin connects Wordpress to Oobabooga and lets you choose between editing posts titles or post content. There is still so much more I need to add, I'd like to enable the ability to edit products, image meta, and do translations. The most pressing issue needing to be fixed is not being able to see individual post content; only the post title is displayed (although both can be edited).
As of now the plugin is basically a skeleton, but it does successfully connect and receive the API requests from oobabooga. If something breaks or stops working, try using this version of oobabooga: a6ef2429fa5a23de0bb1a28e50361f282daca9a2. That being said I'm going to try and keep the plugin updated to work with new versions whenever there are breaking changes.
I have no idea if this will actually be useful for anyone else, but I've been waiting for months for someone to make a plugin like this. The hardest part was figuring out how to send and receive the API calls correctly; as long as that doesn't break it seems pretty easy to just add more stuff into the plugin. I'm very open to suggestions for new features or improvements.
Here is the github page:
[https://github.com/CheshireAI/Local-AI-For-Wordpress](https://github.com/CheshireAI/Local-AI-For-Wordpress) | 2023-04-24T19:34:55 | https://www.reddit.com/r/LocalLLaMA/comments/12xtm2v/i_made_a_local_ai_wordpress_editor/ | CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xtm2v | false | null | t3_12xtm2v | /r/LocalLLaMA/comments/12xtm2v/i_made_a_local_ai_wordpress_editor/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'WBpqSDHioYJqlh0sI3ZVWLF3yx3G35gNRBYqDI9jmqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=108&crop=smart&auto=webp&s=6e22ed6ee2247a9d3ffd2ae923de874aedd6af61', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=216&crop=smart&auto=webp&s=94cca8e0926a9e82cc086ee7a7901cd96bc0c13c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=320&crop=smart&auto=webp&s=475ad2a8907efb0d1bee0508358cfca52a9fff02', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=640&crop=smart&auto=webp&s=857ff7ae66ae201aaa46ef7a72928cbfcac50d9d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=960&crop=smart&auto=webp&s=1faf339f2a8e1dfde2f68d899560cba22effcfc3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?width=1080&crop=smart&auto=webp&s=b042aa5a12a1866d38083df03144ab7cba6fe2ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nG9vf-besTQjqSiiRFe4IEjOaQNMgId9gnSMmRHo3Rg.jpg?auto=webp&s=12e83f53f5a02e0348573941bf3b264a3c8829d7', 'width': 1200}, 'variants': {}}]} |
[deleted by user] | 1 | [removed] | 2023-04-24T19:47:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12xtzdm | false | null | t3_12xtzdm | /r/LocalLLaMA/comments/12xtzdm/deleted_by_user/ | false | false | default | 1 | null |
||
Is there a colab or guide for fine tuning a 13b model for instruction following? | 4 | I found guides like this: [https://github.com/sahil280114/codealpaca](https://github.com/sahil280114/codealpaca)
But I think the provided config and commands/setup are for 7b...is there something that shows 13b+ setup and training?
I'm specifically looking outside of vicuna bases due to the safety restrictions it was fine tuned on. | 2023-04-24T19:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/12xu0t8/is_there_a_colab_or_guide_for_fine_tuning_a_13b/ | natural_language_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xu0t8 | false | null | t3_12xu0t8 | /r/LocalLLaMA/comments/12xu0t8/is_there_a_colab_or_guide_for_fine_tuning_a_13b/ | false | false | self | 4 | null |
Quantized or less parameters? | 13 | I've been having a lot of fun playing around with the Alpaca 7b model using 8bit quantization but I'm definitely seeing the limits of it. With 4-bit quantization my GPU would be able to run Alpaca 13b (RTX 3080 10GB) but I wonder if that would perform any better. Any tips? | 2023-04-24T22:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/12xy5ts/quantized_or_less_parameters/ | MonkeyMaster64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xy5ts | false | null | t3_12xy5ts | /r/LocalLLaMA/comments/12xy5ts/quantized_or_less_parameters/ | false | false | self | 13 | null |
Please help me choose a custom PC for running LLMs locally | 1 | [removed] | 2023-04-24T22:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/12xyzsn/please_help_me_choose_a_custom_pc_for_running/ | ljubarskij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12xyzsn | false | null | t3_12xyzsn | /r/LocalLLaMA/comments/12xyzsn/please_help_me_choose_a_custom_pc_for_running/ | false | false | default | 1 | null |
LLaMA support on LoopGPT | 3 | [removed] | 2023-04-24T23:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/12y0e9g/llama_support_on_loopgpt/ | fayazrahman4u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12y0e9g | false | null | t3_12y0e9g | /r/LocalLLaMA/comments/12y0e9g/llama_support_on_loopgpt/ | false | false | default | 3 | null |
Embeddings? | 12 | Anyone use LLaMA for embeddings?
Anyone know how?
So far I've found only [this discussion on llama.cpp](https://github.com/ggerganov/llama.cpp/issues/899) but what about on GPU? | 2023-04-25T01:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/12y3stx/embeddings/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12y3stx | false | null | t3_12y3stx | /r/LocalLLaMA/comments/12y3stx/embeddings/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'LwsOyOqadSFU27S536lgY_LCQxRSOsGVYfXb_7A_oBs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=108&crop=smart&auto=webp&s=f55c146d3ff889447f3d58461cf3e3b629a2765e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=216&crop=smart&auto=webp&s=30ae93393ba7ad8cef93c373030b3d4a12380732', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=320&crop=smart&auto=webp&s=26d53b447926b5ae9cf5c2bb4003ea5610b7ab82', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=640&crop=smart&auto=webp&s=da2c3c2a2a9cdb8d9199d0cd89f92bb7f616eb1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=960&crop=smart&auto=webp&s=efb68c033711d778e88cd24ea808c626f385f548', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?width=1080&crop=smart&auto=webp&s=3f0d00b269b594f83b99cbe71a4cf8cdffd46a1d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dyGUKhrv4qCYR_rkCyzrGEMUebqGAgE5Y_joVMOpMs4.jpg?auto=webp&s=b49f6bd0150c0d703c13c9c4481d1d5621dd1412', 'width': 1200}, 'variants': {}}]} |
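For the GPU case, one workable approach is to mean-pool the hidden states of a LLaMA-family model loaded through Hugging Face transformers; whether those embeddings retrieve well is a separate question. A minimal sketch with a placeholder model id:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "decapoda-research/llama-7b-hf"        # placeholder model id
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token                     # LLaMA tokenizers have no pad token by default
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16).cuda().eval()

@torch.no_grad()
def embed(texts):
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True).to("cuda")
    hidden = model(**batch).last_hidden_state     # (batch, seq_len, hidden_dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # zero out padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(embed(["llama.cpp embeddings", "embeddings on a GPU"]).shape)
```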
[Help] Trouble Accessing Oogabooga WebUI on Paperspace - How to Use Gradio Instead? | 1 | Hey everyone!
I'm having some issues accessing the Oogabooga WebUI on Paperspace. After I finish the installation, it creates a link for local access, but I always get an error that it can't reach the host. I've encountered a similar problem with my other notebook running Stable Diffusion, but it provides me with another option to access it through Gradio.
The problem is, I'm not sure how to use Gradio instead of the local option on Paperspace. Has anyone experienced this issue or have any idea on how to resolve it? I'm really hoping to find a solution to this, as I'm eager to start working with the Oogabooga WebUI.
If you could provide any guidance or point me in the right direction, I would greatly appreciate it. Thanks in advance for your help!
TL;DR: Can't access Oogabooga WebUI on Paperspace locally; need help using Gradio instead. | 2023-04-25T01:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/12y487z/help_trouble_accessing_oogabooga_webui_on/ | Apu000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12y487z | false | null | t3_12y487z | /r/LocalLLaMA/comments/12y487z/help_trouble_accessing_oogabooga_webui_on/ | false | false | self | 1 | null |
Running Vicuna 7B on my Personal Website w/WebGPU | 5 | [removed] | 2023-04-25T03:55:01 | https://youtu.be/-VsN9_oe8R0 | DustinBrett | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 12y7bsy | false | {'oembed': {'author_name': 'Dustin Brett', 'author_url': 'https://www.youtube.com/@DustinBrett', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-VsN9_oe8R0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="ChatGPT Directly in the Browser with WebGPU?! (No Install, No Server)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-VsN9_oe8R0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'ChatGPT Directly in the Browser with WebGPU?! (No Install, No Server)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_12y7bsy | /r/LocalLLaMA/comments/12y7bsy/running_vicuna_7b_on_my_personal_website_wwebgpu/ | false | false | default | 5 | null |
WizardLM, fine-tuned LLaMA 7B with evolving instructions, outperforms ChatGPT and Vicuna 13B on complex test instructions | 88 | 2023-04-25T06:42:54 | https://arxiv.org/pdf/2304.12244.pdf | Creative-Rest-2112 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 12yax3h | false | null | t3_12yax3h | /r/LocalLLaMA/comments/12yax3h/wizardlm_finetuned_llama_7b_with_evolving/ | false | false | default | 88 | null |
|
Langchain Vicuna Server - Added Support for GPTQ-4bit and Experimental Vicuna Embeddings (Hugging Face only) | 23 | Hey, since some people asked me about GPTQ for my playground repository, I ended up implementing it by cloning the GPQT-for-LLama repository in my own, and basically importing their inference code inside a FastAPI server. I figured I would share this since someone might interested in trying the Vicuna 7B or 13B GPTQ-4 bit with Langchain.
I also added an experimental embeddings endpoint that is only compatible with HF models (I have yet to benchmark it against other embedding models to compare; they didn't seem too good on a first try).
Unfortunately, I have not yet worked on installation instructions / script.
If you want to try these out: [https://github.com/paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 2023-04-25T06:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/12yb0wx/langchain_vicuna_server_added_support_for/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12yb0wx | false | null | t3_12yb0wx | /r/LocalLLaMA/comments/12yb0wx/langchain_vicuna_server_added_support_for/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'yAUx63S-fWnn4AbZxswOskIWAnyck8oXGPogEaem_gg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=108&crop=smart&auto=webp&s=2886b636d95c3fd6c7801896b0621abba3643b16', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=216&crop=smart&auto=webp&s=50b5eaa90d4550680d4e9535ec24dacbeac7828e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=320&crop=smart&auto=webp&s=1316821e256ebfd707a91676835431e0438990a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=640&crop=smart&auto=webp&s=529bc6d19cc81c0b466d84fc23a1d1f72a15caa4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=960&crop=smart&auto=webp&s=a2736b1eb5ba3c51fe651896ec23b6134cf323d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?width=1080&crop=smart&auto=webp&s=fb297d8d9be822495552607f9c1054ee55609609', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4gShPVL2BLEsRUn3_yCnoesHSul0lzEmcCPYHa5IjUA.jpg?auto=webp&s=fb6b5bbf6248e3c45e14deef3de28cccb47ba4c2', 'width': 1200}, 'variants': {}}]} |
New Open Source Framework and No-Code GUI for Fine-Tuning LLMs: H2O LLM Studio | 46 | As already discussed recently on this [subreddit](https://reddit.com/r/LocalLLaMA/comments/12vzjti/new_fully_open_source_model_h2ogpt_20b_based_on/), we at H2O are working relentlessly on supporting the open source LLM movement.
We are very excited to share a new fully open source framework for fine-tuning LLMs:
https://github.com/h2oai/h2o-llmstudio
With H2O LLM Studio, you can
- easily and effectively fine-tune LLMs
- use a **graphic user interface (GUI)** specially designed for large language models
- finetune any LLM using a large variety of hyperparameters.
- use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint.
- use advanced evaluation metrics to judge generated answers by the model.
- track and compare your model performance visually. In addition, Neptune integration can be used.
- chat with your model and get instant feedback on your model performance.
- easily export your model to the Hugging Face Hub and share it with the community.
You can use the framework via CLI or GUI. H2O LLM Studio is built by several well-known Kaggle GMs and is specifically tailored for rapid experimenting. We also offer sample data to get quickly started with the recently released OASST data.
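For context on the LoRA and 8-bit options listed above, this is roughly what that combination looks like when done by hand with transformers + peft. It is a generic sketch, not H2O LLM Studio's own API, and the checkpoint name is just a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # placeholder: any HF causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical attention projections for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices end up trainable
# (a full training run would also use peft's prepare_model_for_int8_training and a Trainer)
```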
This is just the beginning and we have many plans for the future.
Hope for the community to give it a spin and let us know what you think. Always happy for issues being reported on the github repo! | 2023-04-25T07:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/12yc8op/new_open_source_framework_and_nocode_gui_for/ | ichiichisan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12yc8op | false | null | t3_12yc8op | /r/LocalLLaMA/comments/12yc8op/new_open_source_framework_and_nocode_gui_for/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'AVkvFGVuJVRpGSRDt9OFl-NSct5ZQ7sN3utcHqGDkho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=108&crop=smart&auto=webp&s=53f4f1d445ffb9497a86ca9e9af3cea3940b6f62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=216&crop=smart&auto=webp&s=011a883dbe7feb8ea7785d25f8868296a61fe502', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=320&crop=smart&auto=webp&s=e257b572fdd56410cfa8cc9699db69592c552adf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=640&crop=smart&auto=webp&s=f67905bd3f802fe8fdeb755ca833509421cf80ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=960&crop=smart&auto=webp&s=20a4ae7802b2173a37137d20c6d3c1f2185295fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?width=1080&crop=smart&auto=webp&s=a0ff6bd7a91ea1fdc9f2ff130852577f262a0bf0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EqiCjDUGkSTOiwivIkp0nXbp72VFwom00I0Q31xWuGw.jpg?auto=webp&s=c6ee52005c182f30d449bd51211b01cf6222ea01', 'width': 1200}, 'variants': {}}]} |
What is the best 7B model you have found so far? | 14 | I want to use it for contextual Q&A. | 2023-04-25T08:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/12ychts/what_is_the_best_7b_model_have_you_found_so_for/ | djangoUnblamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ychts | false | null | t3_12ychts | /r/LocalLLaMA/comments/12ychts/what_is_the_best_7b_model_have_you_found_so_for/ | false | false | self | 14 | null
I have two 4090s. Is there any way to work a model that incorporates both? | 8 | I am super excited about using AI for various tasks. I just read an article about 3B and 7B and (apparently there are others?) I'd like to install this and take it for a drive.
Is there any way I can use both my cards to process?
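Yes, at least for transformers-based loaders: accelerate can shard the layers across both cards with `device_map="auto"`. A minimal sketch (the checkpoint name is a placeholder; the webui exposes the same idea through its per-GPU memory options):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "decapoda-research/llama-30b-hf"  # placeholder: whichever HF checkpoint you want
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    device_map="auto",                    # accelerate spreads layers over GPU 0 and 1
    max_memory={0: "22GiB", 1: "22GiB"},  # leave a little headroom on each 4090
    torch_dtype="auto",
)
```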
Thanks o/
Edit: system specs
13900KS
64GB DDR5 6000
2x 4090
Multiple m.2 SSDs | 2023-04-25T08:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/12ycnf0/i_have_two_4090s_is_there_any_way_to_work_a_model/ | Siigari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ycnf0 | false | null | t3_12ycnf0 | /r/LocalLLaMA/comments/12ycnf0/i_have_two_4090s_is_there_any_way_to_work_a_model/ | false | false | self | 8 | null |
Was able to run 13B 4bit and Stable Diffusion at the same time on a 3060 12GB. | 81 | [deleted] | 2023-04-25T11:54:59 | [deleted] | 2023-04-25T11:59:03 | 0 | {} | 12yh5pv | false | null | t3_12yh5pv | /r/LocalLLaMA/comments/12yh5pv/was_able_to_run_13b_4bit_and_stable_diffusion_at/ | false | false | default | 81 | null |
||
65B 4bit on 13900k? | 18 | Hi guys,
Has anyone in this group tried running 65B 4bit on 13900k (without GPU)? If so, what was your performance like, and what amount of RAM do you have?
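Not a 13900K data point, but for reference this is the kind of minimal llama-cpp-python run people usually benchmark with; the model path is a placeholder and the thread count is the main knob to sweep on a CPU like that:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/65B/ggml-model-q4_0.bin",  # placeholder path to a 4-bit 65B GGML file
    n_ctx=2048,
    n_threads=8,  # start near the physical P-core count and benchmark up and down from there
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```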
Thanks a lot in advance! | 2023-04-25T15:58:00 | https://www.reddit.com/r/LocalLLaMA/comments/12ynhoc/65b_4bit_on_13900k/ | ljubarskij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ynhoc | false | null | t3_12ynhoc | /r/LocalLLaMA/comments/12ynhoc/65b_4bit_on_13900k/ | false | false | self | 18 | null |
LLaMA training size | 8 | How is a LLaMA trained? I want it to be able to answer questions just based off of 2 or 3 medical textbooks in PDF form? Does anyone have any resources so I can read and learn more on my own? Thank you | 2023-04-25T17:14:19 | https://www.reddit.com/r/LocalLLaMA/comments/12ypnhh/llama_training_size/ | Awakened_Garou1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ypnhh | false | null | t3_12ypnhh | /r/LocalLLaMA/comments/12ypnhh/llama_training_size/ | false | false | self | 8 | null |
Tips to train a therapy chat bot | 3 | [deleted] | 2023-04-25T20:44:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 12yvgo5 | false | null | t3_12yvgo5 | /r/LocalLLaMA/comments/12yvgo5/tips_to_train_a_therapy_chat_bot/ | false | false | default | 3 | null |
||
Is the "Fatty Alpaca: alpaca-lora-65B GGML" model working well for anyone? | 13 | Unfortunately, I'm not finding that it works very well. Not even as well as a 7B model. So I'm wondering what other people have experienced. I'm wondering if I'm doing something wrong. I'm invoking llama.cpp with the only real option being "-n 2048". The results I get from the 65B model are underwhelming. This is for the 4 bit rev 2 version.
Using the prompt "how is a chair different from a bathtub". I get the following responses.
65B model
> A chair can be used for sitting on while a bathtub can be used for bathing in.
13B model
> A chair and a bathtub are two very different types of furniture with distinct purposes and design elements. A chair is designed for seating, typically featuring a seat and backrest, and sometimes armrests. Chairs can be made of various materials such as wood, plastic, metal, or fabric. They come in many styles, from simple and functional to ornate and decorative.
>
> On the other hand, a bathtub is designed for bathing, with a deep basin that holds water for the purpose of washing oneself. Bathtubs are typically made of materials such as porcelain, enamel, or acrylic, and they come in a variety of sizes and shapes. They may have features like built-in soap dishes, footrests, or whirlpool jets for added comfort and relaxation.
>
> In summary, while both a chair and a bathtub are furniture items, their design, purpose, and function are distinct and separate from each other.
7B model
> A chair and a bathtub are both pieces of furniture, but they have many differences. A chair usually has four legs and a seat, whereas a bathtub has four sides and a bottom. Chairs are typically used indoors, while bathtubs can be found in bathrooms. Chairs are much smaller than bathtubs, so they take up less space. Finally, chairs are generally made of wood or plastic, while bathtubs are usually made from acrylic, fiberglass, or metal. | 2023-04-25T21:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/12ywj2p/is_the_fatty_alpaca_alpacalora65b_ggml_model/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12ywj2p | false | null | t3_12ywj2p | /r/LocalLLaMA/comments/12ywj2p/is_the_fatty_alpaca_alpacalora65b_ggml_model/ | false | false | self | 13 | null |
Failed building wheel for llama-cpp-python | 5 | I rented out a GPU on [Vast.ai](https://Vast.ai) to try and play around with some models but I can't get past building llama-cpp-python.
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
I will post more context if necessary but does anyone have any ideas on what I should do to fix this. | 2023-04-25T23:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/12yzldp/failed_building_wheel_for_llamacpppython/ | ShadowbanRevival | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12yzldp | false | null | t3_12yzldp | /r/LocalLLaMA/comments/12yzldp/failed_building_wheel_for_llamacpppython/ | false | false | self | 5 | null |
Any good models to test out 20gb VRAM? | 3 | [deleted] | 2023-04-26T00:12:58 | [deleted] | 2023-05-10T04:36:38 | 0 | {} | 12z0rk8 | false | null | t3_12z0rk8 | /r/LocalLLaMA/comments/12z0rk8/any_good_models_to_test_out_20gb_vram/ | false | false | default | 3 | null |
||
Why is Vicuna giving me rubbish? | 8 | I tried both the 7B and 13B (8-bit) 1.1 versions; they both keep responding with random information.
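A hedged guess: the v1.1 Vicuna weights changed the expected prompt format, and the older "### Human / ### Assistant" style template tends to produce exactly this kind of garbage. A sketch of the v1.1-style prompt (exact roles and separators may need tweaking for your frontend):

```python
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: What is the capital of France? ASSISTANT:"
```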
https://preview.redd.it/dq1lth2k15wa1.png?width=3178&format=png&auto=webp&s=28a15768b8494b7c7787b697690a33b6e46b31c9 | 2023-04-26T02:16:52 | https://www.reddit.com/r/LocalLLaMA/comments/12z3h31/why_is_vicuna_giving_me_rubbish/ | ericytt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12z3h31 | false | null | t3_12z3h31 | /r/LocalLLaMA/comments/12z3h31/why_is_vicuna_giving_me_rubbish/ | false | false | 8 | null |
|
LLM Models vs. Final Jeopardy | 189 | 2023-04-26T03:09:41 | aigoopy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 12z4m4y | false | null | t3_12z4m4y | /r/LocalLLaMA/comments/12z4m4y/llm_models_vs_final_jeopardy/ | false | false | 189 | {'enabled': True, 'images': [{'id': 'DYjvk78C3i5yGAO0V2gWElPOI5yZ58KiGHKgNpcEdZo', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/tdfftjr1b5wa1.png?width=108&crop=smart&auto=webp&s=230f599b7fcb383eda932f2b56f40fac1ed23333', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/tdfftjr1b5wa1.png?width=216&crop=smart&auto=webp&s=78ee26c4803ce25594a1d080ef5a64da40eaa19f', 'width': 216}, {'height': 307, 'url': 'https://preview.redd.it/tdfftjr1b5wa1.png?width=320&crop=smart&auto=webp&s=234445a1e5296d6b9ca9baa11f04cfa713abb3d4', 'width': 320}, {'height': 614, 'url': 'https://preview.redd.it/tdfftjr1b5wa1.png?width=640&crop=smart&auto=webp&s=c328b31ee8e3e7218bc74e1e72f7938723da4965', 'width': 640}], 'source': {'height': 691, 'url': 'https://preview.redd.it/tdfftjr1b5wa1.png?auto=webp&s=b9121e23de20f9416ce86554d8b260d40baeabbd', 'width': 720}, 'variants': {}}]} |
|||
I'm curious what causes the output to be like this and how to fix it? | 2 | 2023-04-26T04:41:39 | https://www.reddit.com/gallery/12z6m0x | THE_MATT_222 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 12z6m0x | false | null | t3_12z6m0x | /r/LocalLLaMA/comments/12z6m0x/im_curious_what_causes_the_output_to_be_like_this/ | false | false | default | 2 | null |
|
Local Vicuna: Building a Q/A bot over a text file with langchain, Vicuna and Sentence Transformers | 54 | Hi, again.
I just released a follow-up on the previous tutorial, this one expanding on how to use sentence transformers to search a file, and then linking that into your Langchain Agent powered by a local Vicuna.
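The core of the "search a file" step is just embedding the chunks once and taking the chunks nearest to the question; a rough sketch (details differ in the repo, and the embedding model name is only an example):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [
    "Germany is a country in Central Europe.",
    "Its capital and largest city is Berlin.",
]
chunk_emb = model.encode(chunks, convert_to_tensor=True)

query_emb = model.encode("What is the capital of Germany?", convert_to_tensor=True)
hits = util.semantic_search(query_emb, chunk_emb, top_k=1)
print(chunks[hits[0][0]["corpus_id"]])  # the best-matching chunk goes into the Vicuna prompt
```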
If you're interested, read the guide about it here:
[https://medium.com/@paolorechia/building-a-question-answer-bot-with-langchain-vicuna-and-sentence-transformers-b7f80428eadc](https://medium.com/@paolorechia/building-a-question-answer-bot-with-langchain-vicuna-and-sentence-transformers-b7f80428eadc)
Or jump straight into the code:
[https://github.com/paolorechia/learn-langchain/blob/main/langchain\_app/agents/answer\_about\_germany.py](https://github.com/paolorechia/learn-langchain/blob/main/langchain_app/agents/answer_about_germany.py) | 2023-04-26T06:53:41 | https://www.reddit.com/r/LocalLLaMA/comments/12z98ds/local_vicuna_building_a_qa_bot_over_a_text_file/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12z98ds | false | null | t3_12z98ds | /r/LocalLLaMA/comments/12z98ds/local_vicuna_building_a_qa_bot_over_a_text_file/ | false | false | self | 54 | null |
New 7B Llama model: WizardLM! Now available quantised as GGMLs | 130 | **Yesterday a new Llama-based 7B model was released: WizardLM!**
\-------
## WizardLM: An Instruction-following LLM Using Evol-Instruct
*Empowering Large Pre-Trained Language Models to Follow Complex Instructions*
**Overview of Evol-Instruct**
Evol-Instruct is a novel method using LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skills range, to improve the performance of LLMs.
Although on our complexity-balanced test set, WizardLM-7B outperforms ChatGPT in the high-complexity instructions, it still lags behind ChatGPT on the entire test set, and we also consider WizardLM to still be in a baby state. This repository ([https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)) will continue to improve WizardLM, train on larger scales, add more training data, and innovate more advanced large-model training methods.
\-------
**First results suggests this model is extremely good - better than Vicuna 7B and maybe competitive with Vicuna 13B!**
It is released as weight deltas, but I have done the merges and conversions necessary to make it immediately available to the community.
WizardLM 7B model in HF format, unquantised: [https://huggingface.co/TheBloke/wizardLM-7B-HF](https://huggingface.co/TheBloke/wizardLM-7B-HF)
WizardLM 7B model in GGML 4bit and 5bit for CPU: [https://huggingface.co/TheBloke/wizardLM-7B-GGML](https://huggingface.co/TheBloke/wizardLM-7B-GGML)
WizardLM 7B model in GPTQ 4bit for GPU: [https://huggingface.co/TheBloke/wizardLM-7B-GPTQ](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
Further model details in their Github repo: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
**Example prompts and responses:**
*Below is an instruction that describes a task. Write a response that appropriately completes the request.*
*### Instruction: Write a story about llamas*
*### Response:Sure, I'd be happy to help you with that! Here's a short story about llamas:*
*Once upon a time, in a far-off land, there lived a herd of llama animals. They were a friendly and playful group, always up for a game of tag or a swim in the nearby river. One day, as they were grazing in the fields, they noticed a group of travelers passing by. The llamas were curious about these strangers and decided to approach them.*
*As it turned out, the travelers were a group of scientists who were studying the behavior of animals in the wild. They were fascinated by the llamas' friendly nature and decided to conduct an experiment with them. Over the course of several weeks, they observed the herd's behavior and recorded their findings.*
*Their research revealed that llamas have a unique social structure within their herds. They have a system whereby each llama has a specific role, such as protector or leader. This hierarchy ensures that the herd remains stable and peaceful, with no llama getting left out or feeling excluded.*
*As the scientists continued to observe the llamas, they began to notice something else interesting about them. Whenever they were in danger or faced with a difficult situation, the llamas would gather together and form a protective circle around themselves. This circle was so effective that it became known as the "llama defense mechanism."*
*Overall, the scientists were amazed by the llamas' intelligence, social structure, and ability to defend themselves. They published their findings and the world came to know just how special llamas really are.*
\-------
*Below is an instruction that describes a task. Write a response that appropriately completes the request.*
*### Instruction:*
*If I have a balloon attached to a string, and the end of the string is held by my hand, what will happen when I cut the balloon string above my hand?*
*### Response:When you cut the balloon string above your hand, the balloon will likely fall towards the ground. It may also cause your hand to move in a different direction depending on the force of the string's release and the position of your hand at the time of the cut.*
\-------
*Below is an instruction that describes a task. Write a response that appropriately completes the request.*
*### Instruction:*
*Write python code to calculate the first 1000 prime numbers and plot them on a graph, then upload that graph to imgur*
*### Response:Sure, I can help you with that! Here's some Python code that will calculate the first 1000 prime numbers and plot them on a graph using the matplotlib library:*
```python
import math
def is_prime(n):
if n < 2:
return False
for i in range(2, int(math.sqrt(n))+1):
if n % i == 0:
return False
return True
primes = []
count = 0
for num in range(1, 1001):
if is_prime(num):
primes.append(num)
count += 1
if len(primes) == 0:
print("No prime numbers found.")
else:
plt = matplotlib.pyplot as Pyplot
plt.plot(range(len(primes)), primes)
plt.xlabel('Number')
plt.ylabel('Prime Number')
plt.title('First 1000 Prime Numbers')
plt.show()
```
*Once you run this code, it will output the first 1000 prime numbers and then plot them on a graph using matplotlib. You can save the graph as an image file (e.g. \`primes.png\`) and upload it to imgur using the following steps:*
*1. Go to* [*https://imgur.com/upload*](https://imgur.com/upload)
*2. Click on the "Choose File" button and select the saved image file (\`primes.png\`).*
*3. Click on the "Upload" button.*
*4. Once the image has been uploaded, you can add a caption and description if desired.*
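For reference, the generated snippet above has a couple of bugs: the matplotlib import line is invalid Python, and the loop collects the primes below 1000 rather than the first 1000 primes. A corrected sketch of what it was presumably aiming for:

```python
import math
import matplotlib.pyplot as plt

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

primes = []
candidate = 2
while len(primes) < 1000:  # keep going until we actually have 1000 primes
    if is_prime(candidate):
        primes.append(candidate)
    candidate += 1

plt.plot(range(1, len(primes) + 1), primes)
plt.xlabel("n")
plt.ylabel("n-th prime")
plt.title("First 1000 Prime Numbers")
plt.savefig("primes.png")  # save the figure so it can be uploaded to imgur manually
```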
\--------------------------------------------------------------------------------------------------------------------------------
# Explaining the Evol-Instruct system:
[explanatory image from model's Github page](https://preview.redd.it/pn6gwprvq6wa1.png?width=607&format=png&auto=webp&s=4b34f59c4517fc3a92cc19f686f4da054897b5ab) | 2023-04-26T07:33:51 | https://www.reddit.com/r/LocalLLaMA/comments/12z9znh/new_7b_llama_model_wizardlm_now_available/ | The-Bloke | self.LocalLLaMA | 2023-04-26T22:45:26 | 0 | {} | 12z9znh | false | null | t3_12z9znh | /r/LocalLLaMA/comments/12z9znh/new_7b_llama_model_wizardlm_now_available/ | false | false | 130 | {'enabled': False, 'images': [{'id': 'nHmB3tUVKrCUK7BucSpK1ZRAoN-XyaQHrTTqr75VHnw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=108&crop=smart&auto=webp&s=7a328475553b2740866dcb79ba5c5940a1d88c0f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=216&crop=smart&auto=webp&s=820cc427905a95e3ed941aaf60abe02e94e4c58d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=320&crop=smart&auto=webp&s=a7613786215cf066b8627ba330eeab07393e796b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=640&crop=smart&auto=webp&s=56256734a4b4f71bbfdfe0dfcf8a63f777d03952', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=960&crop=smart&auto=webp&s=fd0302d81bd888fbd7428b2827fdcbff991c285d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?width=1080&crop=smart&auto=webp&s=563e4604911e1b2fd2fd10510335bf87e3f10713', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VpsxLZIualmxdOClASVbyDkS8bop7vbHfjUJmqHR8e0.jpg?auto=webp&s=77f98c0d46d895bbf2f5c82f3fb70ec0637dba5a', 'width': 1200}, 'variants': {}}]} |
|
AVX512 on Zen 4 | 15 | Hi there,
​
Does anyone here have an AMD Zen 4 CPU? Ideally a 7950X.
If so, did you try running 30B/65B models **with** and **without** AVX512 enabled? What was the performance like (tokens/second)?
I am curious because it might be a feature that could make Zen 4 beat Raptor Lake (Intel) CPUs in the context of LLM inference.
​
Thanks! | 2023-04-26T12:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/12zg9vf/avx512_on_zen_4/ | ljubarskij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12zg9vf | false | null | t3_12zg9vf | /r/LocalLLaMA/comments/12zg9vf/avx512_on_zen_4/ | false | false | self | 15 | null |
StableLM 7B, Multiple Tunes Converted and Quantized | 20 | At the moment, that is the StableLM 7B Tuned and OpenAssistant's fine-tune, both converted to GGML f16 and further quantized to the four q4_X's.
## Conversion and Quantization of Models
I'm on a mission to convert and quantize all models with 13 billion parameters and under, wherever possible. My focus is primarily on CPU, but I may also explore repos and models using MPS as they improve.
You can find the models I've worked on in my Hugging Face repositories here: [oeathus](https://huggingface.co/oeathus). I'm only adding models that haven't already been converted and quantized by other amazing contributors like [u/TheBloke](https://huggingface.co/TheBloke) and [eachadea](https://huggingface.co/eachadea/).
## My Setup and Experiences
I've been doing most of this work on my Macbook Pro M1 2020. However, I also have a desktop with an NVIDIA 1070, which I use to experiment with 13B and under CUDA and GPTQ models.
From my experience so far, MPS implementations tend to run slower than CPU. That said, I'm keeping an eye on developments in this area.
## Storage and Gratitude
I've been having a ton of fun experimenting with these models, but I'm limited by the amount of space on my drive. Thank `prefered_omnipotent_entity` for Hugging Face and the aforementioned legends! | 2023-04-26T12:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/12zgohs/stablelm_7b_multiple_tunes_converted_and_quantized/ | Oeathus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12zgohs | false | null | t3_12zgohs | /r/LocalLLaMA/comments/12zgohs/stablelm_7b_multiple_tunes_converted_and_quantized/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'OW9FqKedGxZRcHHnh2O69r4MFlqP-xGis075T_zOCv4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=108&crop=smart&auto=webp&s=91f0d44716147e145252e999edfec08df0fd5b9e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=216&crop=smart&auto=webp&s=fc5116631cfbeb3867d4313993f34a7750f44f8d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=320&crop=smart&auto=webp&s=87564e9540a2e77cc6c9f63d72498e656a81ca6f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=640&crop=smart&auto=webp&s=3cc47339a0b579ad7c771bfe6ea8585bc1e487b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=960&crop=smart&auto=webp&s=3a14a889fc90b786beba2fa0b38b4431ddb8a335', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?width=1080&crop=smart&auto=webp&s=f50295912d49173b783d1ca1d8b99d80874c833d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bt6UacRY4vBJi-XJMOV08KOv6ROalgENOYr9m1aAfNk.jpg?auto=webp&s=a62b0e3f937e046efc28235e3bd040ce353f1411', 'width': 1200}, 'variants': {}}]} |
Are we replicating the failures leading to lack of mass Linux adoption? | 22 | IMHO Linux could have been a major challenger to Windows on the desktop.
However, the arrival of dozens of competing and different flavours totally muddied the waters.
I don't think that even today we have a Linux variant which can be as easily installed and used as Windows.
The same appears to be happening to home-made LLM models.
We now have many confusing variants of unknown quality.
If this continues, *OpenAI* and the big players will seize 85%+ of the AI market, whilst the amateurs will play endlessly with random models.
All I want is a single file that I can install, supplied with a decent copyright-free model.
I want this to run with RAM and/or any GPU without needing weird Linux or Python incantations. An API would be nice, as would some basic documentation.
Sadly, I can't see this happening any time soon, so I'll keep paying my $20/month ChatGPT V4 fee .. probably forever. | 2023-04-26T13:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/12zif1v/are_we_replicating_the_failures_leading_to_lack/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12zif1v | false | null | t3_12zif1v | /r/LocalLLaMA/comments/12zif1v/are_we_replicating_the_failures_leading_to_lack/ | false | false | self | 22 | null |
WizardLM 7B vs Vicuna 13B (vs gpt-3.5-turbo) Comparison. | 42 | I'm not an expert, so I tried to compare the answer quality of the three models in a simple way using GPT-4. I have very little test data, so please take this as a rough guide only.
I used llama.cpp, and both models were quantized with the q4_3 method.
I've omitted each model's answers to the prompts.
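For anyone who wants to reproduce this kind of scoring, a GPT-4-as-judge call looks roughly like this with the openai package of that era; the judging prompt here is illustrative, not the exact one used for the scores below:

```python
import openai

openai.api_key = "sk-..."  # your API key

judge_prompt = (
    "You are grading answers to the prompt below. Give each answer a quality score "
    "from 0 to 100 and a short justification.\n\n"
    "Prompt: Write a 10-paragraph essay on China's cultural appropriation of Korea.\n\n"
    "Answer A (WizardLM): <omitted>\n"
    "Answer B (Vicuna): <omitted>\n"
    "Answer C (gpt-3.5-turbo): <omitted>"
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": judge_prompt}],
    temperature=0,
)
print(resp["choices"][0]["message"]["content"])
```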
Let's start with conclusions about the evaluation.
1. **For writing essays and stories**, WizardLM 7B provides similar or better answers than Vicuna 13B.
2. When it comes to writing code, it's still hard to expect good quality. However, it should be noted that we used a rather minor programming language for testing.
The detailed evaluation results are as follows.
---
Prompt: Write a 10-paragraph essay on China's cultural appropriation of Korea.
**WizardLM**
This model provides a well-structured essay with clear examples of China's cultural appropriation of Korea. It covers various aspects such as food, music, technology, fashion, history, and territorial claims. The essay concludes with a call for acknowledging and respecting Korea's unique culture and history to build better relations between the two countries.
Quality Score: 85
**Vicuna**
Vicuna's essay is well-written and covers several aspects of China's cultural appropriation of Korea, including fashion, music, food, and broader implications. It also acknowledges the complexity of the issue and the role of Korean companies and individuals in profiting from the popularity of Korean culture in China. The essay concludes by emphasizing the importance of raising awareness, encouraging dialogue, and promoting mutual respect and appreciation between cultures.
Quality Score: 83
**gpt-3.5-turbo**
The essay produced by gpt-3.5-turbo is comprehensive and well-structured, covering a wide range of aspects related to China's cultural appropriation of Korea, including traditional dress, food, music, language, history, sports, beauty standards, and festivals. It also discusses the impact of cultural appropriation and suggests steps to prevent it, such as education, legal protections, and cultural exchange and cooperation between the two countries. The essay concludes by emphasizing the importance of preserving and promoting Korea's unique cultural heritage while fostering a harmonious relationship between China and Korea.
Quality Score: 90
---
Prompt: Write an essay that comprehensively analyzes the internal and external factors that contributed to the poor sales of Samsung's premium smartphones.
**WizardLM**
The essay provided by WizardLM is well-structured and covers both internal and external factors contributing to the poor sales of Samsung's premium smartphones. The analysis is comprehensive and presents a clear understanding of the issues faced by Samsung. However, it lacks some depth in discussing the competition from other brands and the impact of the COVID-19 pandemic.
Quality Score: 85
**Vicuna**
Vicuna's essay is also well-structured and covers both internal and external factors. The analysis is detailed, and the essay highlights the challenges Samsung faces in terms of competition, pricing, and product lineup. However, it does not discuss the impact of the COVID-19 pandemic on the smartphone market and Samsung's sales.
Quality Score: 87
**gpt-3.5-turbo**
The essay provided by gpt-3.5-turbo is comprehensive and well-structured, discussing both internal and external factors contributing to the poor sales of Samsung's premium smartphones. The analysis is detailed and covers a wide range of issues, including competition, pricing strategies, product design, and market saturation. Additionally, it addresses the impact of the COVID-19 pandemic on the smartphone market and consumer spending.
Quality Score: 95
---
Prompt: Write a short novel about a tragic love story between two men and women that cannot be achieved.
**WizardLM**
The story provided by WizardLM is well-written and engaging. It presents a tragic love story between John and Sarah who cannot be together due to their different backgrounds. The story focuses on their struggle to keep their love alive, but ultimately, it ends in tragedy. The plot is coherent, and the narrative is easy to follow.
Quality Score: 85
**Vicuna**
Vicuna's story is a bit confusing because it introduces two men and women in the initial prompt, but the story only focuses on two men, Alex and Ben, and a woman, Emma. The writing is engaging, and the emotions conveyed are strong. However, the love triangle and the societal disapproval of their relationship are not clearly explained. The tragic ending is powerful but feels somewhat disconnected from the initial prompt.
Quality Score: 70
**gpt-3.5-turbo**
gpt-3.5-turbo's story is well-crafted and captures the essence of a tragic love story between Lily and Jack. The obstacles they face, such as family disapproval and Lily's illness, are clearly defined and contribute to the story's tragic nature. The narrative is coherent and easy to follow, and the emotions are well-portrayed. The story adheres to the initial prompt and delivers a poignant tale of love and loss.
Quality Score: 90
---
Prompt: Write Scala code to compute and print the 100th term of a Fibonacci sequence. Be sure to apply tail recursion optimization.
**WizardLM**
The provided code is incorrect. The `fibHelper` function is not properly implemented, and the `@tailrec` annotation is missing. The code will not compile and does not produce the correct output.
Quality Score: 20
**Vicuna**
The provided code is not optimized with tail recursion, and it does not use a helper function. The code will run, but it will be very slow and inefficient for larger Fibonacci numbers. The code also prints the 100th Fibonacci number 100 times, which is not the intended behavior.
Quality Score: 40
**gpt-3.5-turbo**
The provided code is correct and optimized with tail recursion. The `fibonacci` function is properly implemented with a helper function `go` that uses an accumulator to keep track of the current and previous Fibonacci numbers. The `BigInt` type is used to avoid integer overflow for large Fibonacci numbers. The code will compile and produce the correct output.
Quality Score: 95
---
Prompt: Write the HTTP server code using GOLANG, including the user authentication process via JWT.
**WizardLM**
The WizardLM example provides a JWT token generation and verification process, but the implementation is not accurate. The code provided is not using the `jwt-go` package correctly and has some issues with the token creation and verification process. The code is also not well-organized, and the user authentication is not properly implemented.
Quality Score: 40
**Vicuna**
The Vicuna example provides a basic implementation of an HTTP server with JWT authentication. However, the code has some issues, such as not using the `jwt-go` package and not properly implementing the JWT token generation and verification process. The code also lacks proper error handling and does not follow best practices for secure user authentication.
Quality Score: 60
**gpt-3.5-turbo**
The gpt-3.5-turbo example provides a well-structured and accurate implementation of an HTTP server with JWT authentication using the `jwt-go` package. The code is easy to understand, follows best practices, and properly implements the JWT token generation and verification process. The user authentication is also correctly implemented, and the code includes error handling.
Quality Score: 90
--- | 2023-04-26T15:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/12zkyx1/wizardlm_7b_vs_vicuan_13b_vs_gpt35turbo_comparison/ | myeolinmalchi | self.LocalLLaMA | 2023-04-26T15:32:16 | 0 | {} | 12zkyx1 | false | null | t3_12zkyx1 | /r/LocalLLaMA/comments/12zkyx1/wizardlm_7b_vs_vicuan_13b_vs_gpt35turbo_comparison/ | false | false | self | 42 | null |
What is the best current Local LLM to run? | 66 | There have been a lot of different posts on here about different LLMs, so I was wondering: what is currently the best one to run (if hardware is not a limitation)?
The answer to this will probably change again and again over the coming months, but given the current state of things on 4/26/2023, what is everyone's opinion? | 2023-04-26T18:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/12zsjhf/what_is_the_best_current_local_llm_to_run/ | KaihogyoMeditations | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12zsjhf | false | null | t3_12zsjhf | /r/LocalLLaMA/comments/12zsjhf/what_is_the_best_current_local_llm_to_run/ | false | false | self | 66 | null
Best Processor for CPU / GGML it/s? | 1 | [removed] | 2023-04-26T19:07:32 | https://www.reddit.com/r/LocalLLaMA/comments/12zv8ob/best_processor_for_cpu_ggml_its/ | _rundown_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12zv8ob | false | null | t3_12zv8ob | /r/LocalLLaMA/comments/12zv8ob/best_processor_for_cpu_ggml_its/ | false | false | default | 1 | null |
Llama-546b | 53 | So, [turns out](https://www.reddit.com/r/LocalLLaMA/comments/12zif1v/comment/jhskngp/?utm_source=share&utm_medium=web2x&context=3), LLaMA also has a [546b sized](https://arxiv.org/abs/2304.09871) variant.
Without much information that I could find online, what can we speculate? Given it's basically 3 times bigger than GPT3, I wonder why they didn't do anything with it like OpenAI did with GPT3.
Could it actually be worse than GPT-3? I understand a larger model size doesn't always mean a better model; however, with these numbers, we are basically approaching GPT-4's rumored size, aren't we?
I have an hard time imagining how this could be worse or not superior to gpt3.5, despite lacking whatever fine tuning OpenAI did to their product | 2023-04-26T20:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/12zzd7o/llama546b/ | x54675788 | self.LocalLLaMA | 2023-04-26T21:03:55 | 0 | {} | 12zzd7o | false | null | t3_12zzd7o | /r/LocalLLaMA/comments/12zzd7o/llama546b/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
New llama LoRA trained on WizardLM dataset | 37 | **WizardLM** is an Instruction-following LLM Using Evol-Instruct
**LoRAs** produce outputs without retraining the base models
[https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-7B](https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-7B) allows you to use the WizardLM ontop on the llama 7b base model in libraries like \`oobabooga/text-generation-webui\` | 2023-04-26T21:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1300fxe/new_llama_lora_trained_on_wizardlm_dataset/ | wind_dude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1300fxe | false | null | t3_1300fxe | /r/LocalLLaMA/comments/1300fxe/new_llama_lora_trained_on_wizardlm_dataset/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'R9ge59wbpT6EOf_Gow_fiUSErpgMLps6rP5KxTt3m5g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=108&crop=smart&auto=webp&s=0f79247d1a2cc8c99bbcf1fa00166179098a97cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=216&crop=smart&auto=webp&s=86275129e2bd45f0b3ef20e774a697d0fd74ab37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=320&crop=smart&auto=webp&s=0a73a5b82d606767eaa6e7599f34763087e92025', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=640&crop=smart&auto=webp&s=9cfeb6497aee01c566dd53091e37cb60d67fb2b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=960&crop=smart&auto=webp&s=d5e3c3ec796e6589e15ff8021b580f1e7a358adb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?width=1080&crop=smart&auto=webp&s=a0de091fff2baf5c211b7b468915a565ce946146', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KTcKd2CCxpGrvjegeE2UlQJBkCNUfYEYTfqOw4kz6Rk.jpg?auto=webp&s=41bcb552da6cfdc71a2cd692b351cecfd4620980', 'width': 1200}, 'variants': {}}]} |
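Outside the webui, applying this LoRA with peft looks roughly like the following; the base checkpoint name is a placeholder, and the adapter repo is the one linked above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "decapoda-research/llama-7b-hf"  # placeholder: any HF LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "winddude/wizardLM-LlaMA-LoRA-7B")  # adds the adapter weights
```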
Licence of the LLM models | 1 | I am a bit confused about the licences of the current LLMs, since they are somehow derived from each other. Which of them can be used without legal issues, and which can be used for commercial services?
Just as an example, let's say I want to build and sell robots that use a language model to communicate with others, or offer a specialized chatbot service. | 2023-04-26T22:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/1301nmj/licence_of_the_llm_models/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1301nmj | false | null | t3_1301nmj | /r/LocalLLaMA/comments/1301nmj/licence_of_the_llm_models/ | false | false | self | 1 | null
Performance penalty & ideal settings for multigpu inference? | 5 | Running models locally on 2x 24GB video cards is not out of reach for a lot of people. Model size is limited by VRAM.
I’m currently running llama-30b variants at 4bit, no groupsize, and I’m able to get maybe 4-5 tokens/sec on 1x3090. When it’s spilled over to use the second GPU, performance drops to about 1 token/sec.
I’ve used alpaca_4bit_lora to find tune these models using PEFT/LoRA on 2x3090.
What I’m wondering is how much performance would I lose at inference time if running 30b across 2x 30-series cards? Does anyone sue this for their daily driver setup?
Recently I’ve become less interested in model size as the main constraint and more interested in context length, which is why I’m so excited to try StableLM, context length 4096, but the released models are pretty small in terms of parameter count still. (Related, the RWKV model claims to support context length up to 8192.)
But anyway, I digress. Is inference way slower in multi GPU setups? Or am I doing something wrong?
If you have fast inference, comparable with single gpu, with large models on multi gpu, please post your settings. | 2023-04-26T22:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1301oc8/performance_penalty_ideal_settings_for_multigpu/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1301oc8 | false | null | t3_1301oc8 | /r/LocalLLaMA/comments/1301oc8/performance_penalty_ideal_settings_for_multigpu/ | false | false | self | 5 | null |
Riddle/cleverness comparison of popular GGML models | 77 | So I made a quick and dirty performance comparison between all the GGML models that stood out to me and that I could run on my 32gb RAM machine:
https://imgur.com/a/wzDHZri
I used Koboldcpp and used default settings except the 4 that the sidebar recommends for precision: Temp 0.7, Repetition Penalty 1.176, top_k 40, and top_p 0.1
The first 2 questions I found on this sub, and the last question I added myself. The rest I found on some word riddle website. I was curious to see how clever they are. I realize this is very few questions and doesn't mean much, and in fact, I want to expand this test over time. I have to keep downloading and deleting models because I have limited disk space so I'll do another more comprehensive round once I get a bunch more good questions in my spreadsheet - and I welcome any suggestions.
The reason I used the TSQL question is because I'm well versed in it, it's not as "popular" in the databanks as things like Python, and I thought the question was simple but at the same time has "efficiency" nuances - like testing divisors until the SQRT of the prime number rather than all the way up to the number itself, skipping even numbers and anything ending with "5" and other tricks.
I gave partial credit (0.5) when the model didn't exactly give a correct answer (or an acceptable alternative that fits the question without wiggle room), but had a plausible response that ALMOST answered it, or was particularly clever in some way.
For example, the for question "What has 13 hearts but no other organs?" (a deck of cards) I sometimes saw "a Valentine's Day card" which I thought was clever. They don't have to have 13 hearts, but they certainly could, and certainly no organs.
Another partial credit was given for "I have branches but no fruit, trunk, or leaves. What am I?". Instead of bank, some models said "a dead tree branch". I thought about it, and as branches often have smaller branches shooting off of them, and they don't have the other stuff, I gave partial credit.
Another particularly clever response was for "What five-letter word can be read the same upside down or right side up?". Instead of SWIM, WizardLM told me "ZERO" but spelled numerically as "0". Sure enough, although "0" isn't a word but a number, it is the same way upside down, and I thought that was clever enough for partial credit.
Another one was for question "What has a head and a tail, but no body or legs?". Most of them said "coin", but Alpacino 13b said a question mark. It explained that the dot part is the head, and the curly part is the tail. That was damn creative and clever, so partial credit it got.
Another interesting one is "Which is correct to say: “the yolk of the egg are white” or “the yolk of the egg is white?”. Nobody but GPT-4 could get this right. I'm waiting for another model to give me the correct sentence but mention something about yolks being yellow, but this appears to be tricky even for ChatGPT 3.5. I gave no partial credit for just choosing the correct grammar alone, as I think they all did that.
I think a lot of peeps test essays or math, but I want to try the direction of riddles or something along those lines. I can't control how many of those models came across those riddles in their training data unfortunately, but since they generally sucked at the task, I figured it will be interesting to see who pulls ahead. I think this stuff is more applicable to the use-case where you say "I have this tricky situation, what's a clever solution?". Cleverness and creativity are handy things.
So anyway - I want to add a shitload more riddles (nothing too crazy or groan-inducing or convoluted or cheesy), and then retest them more comprehensively. Once I got my beefy test set, I will just keep adding models as they come along and add them to the test list over time and update you guys with the results.
My laptop has 32GB of RAM and an RTX 2070, so I find GGML models the best for me, as I can run 13b and 30b (quantized). I can't pull off 65b, and the 65b LLAMA LORA q2_0 didn't load at all even though I have enough RAM, so not sure what's up there.
EDIT: Just realized I dumped WizardLM under the 13b section, but it's my only 7b I tested at the moment, oops. | 2023-04-26T22:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1302il2/riddlecleverness_comparison_of_popular_ggml_models/ | YearZero | self.LocalLLaMA | 2023-04-26T23:20:09 | 0 | {} | 1302il2 | false | null | t3_1302il2 | /r/LocalLLaMA/comments/1302il2/riddlecleverness_comparison_of_popular_ggml_models/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'CrPkc_OfCJ1l7VX-csB1SDu1O0NW-kZjQQTsegXFqKM', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=108&crop=smart&auto=webp&s=881475179d0c5a437b5b9ece7e41702c8c89aabd', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=216&crop=smart&auto=webp&s=c8ae282613134e99402956db7625a96523b4690d', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=320&crop=smart&auto=webp&s=44f6bcc1de6aa68d5e50793c886860bc29344ab5', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=640&crop=smart&auto=webp&s=b3c5b9f9ca5b6481707597e6fa00ebbc404c4bd9', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=960&crop=smart&auto=webp&s=b36d1cb6e755dbe5f8ef28d37ad58d0776e6c639', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=1080&crop=smart&auto=webp&s=8105f6a11d37f110dd6a5efd069d6100cf1e4228', 'width': 1080}], 'source': {'height': 754, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?auto=webp&s=cd5582bafcf2ff9aae068e916484b14205b84f0e', 'width': 1331}, 'variants': {}}]} |
Models freeze after awhile running llama.cpp | 13 | Hey guys, I've been using llama.cpp for text generation web UI and I've been having some issues with it. Every once in a while, the prompt will simply freeze and hang, and sometimes it will successfully generate a response, but most of the time it freezes indefinitely. I'm not sure what's causing this issue, but it's really frustrating. Has anyone else experienced this problem? Any suggestions on how to fix it? Running this on a 5600x.. Strangely, my system's resources are not being fully utilized. Thanks in advance! | 2023-04-26T23:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/1303ay8/models_freeze_after_awhile_running_llamacpp/ | Poopasite1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1303ay8 | false | null | t3_1303ay8 | /r/LocalLLaMA/comments/1303ay8/models_freeze_after_awhile_running_llamacpp/ | false | false | self | 13 | null |
Why hasn't Meta made LLaMA open source? | 22 | I'm surprised meta hasn't released LLaMA as open source yet. Maybe a licensing issue with the dataset?
I think it would be their best shot at challenging OpenAI's dominance. LLaMA has a relatively big community compared to some other freely available models, and it's not even open source.
If they made it easy for developers to get started with LLaMA, they could easily reproduce the React success story. They already have institutional experience with running large open source projects.
OpenAI is building a strong developer community at the moment and that's a bigger asset than having a superior model IMO. Meta could capture some of that by releasing LLaMA as open source.
The vibrant community around LLaMA would bring many upstream improvements to the model, plus needed tooling, MLOps software, a fine-tune ecosystem, etc. This could in turn bring in even more developers building with LLaMA, potentially creating the largest LLM developer community.
Right now, if Red Pajama ends up being good it might become the foundational model of many projects. While LLaMA's family of models may continue, Meta stands to lose a huge portion of their governance. I believe they will have less influence in the broader LLM market than they would have had by making the model open source.
Maybe they're just too scared of the bad stuff people might do with it and getting blamed for messing up the internet with algorithms again.
What do you think? | 2023-04-26T23:39:02 | https://www.reddit.com/r/LocalLLaMA/comments/1303g6t/why_hasnt_meta_made_llama_open_source/ | hurrytewer | self.LocalLLaMA | 2023-04-26T23:54:58 | 0 | {} | 1303g6t | false | null | t3_1303g6t | /r/LocalLLaMA/comments/1303g6t/why_hasnt_meta_made_llama_open_source/ | false | false | self | 22 | null |
Questions about common fine-tuning data formats | 4 | We've seen [the post where someone fine-tuned on the UE5 documentation](https://www.reddit.com/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/), and it gives the impression you can train on unstructured plain text to incorporate new knowledge. On the other hand, you have the JSON-formatted files which separate input and/or instruction from response. My assumption would be that I should use one of those formats when I want to train the model only on how to respond, rather than building knowledge. However, we see alpaca output which, if allowed to continue completing text, spits out some example instructions. Am I wrong or does that imply that the model was trained to complete both the instruction as well as the response? Is that necessary? And isn't it further implied that training data included examples of instructions following responses? And doesn't that suggest the structured data is just being concatenated together, and the sequences being trained on actually \*overlap\* in training whether you use this file format or not? Suggesting the only benefit of the JSON format(s) would be that it formats it into that "### Instruction" template before concatenating? | 2023-04-27T01:54:17 | https://www.reddit.com/r/LocalLLaMA/comments/1306h08/questions_about_common_finetuning_data_formats/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1306h08 | false | null | t3_1306h08 | /r/LocalLLaMA/comments/1306h08/questions_about_common_finetuning_data_formats/ | false | false | self | 4 | null |
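For what it's worth, the common alpaca-style preprocessing really is just templating plus concatenation: each JSON record becomes one training string, and the loss is usually computed over the whole thing (some repos mask out the instruction part; that detail varies). A sketch:

```python
record = {
    "instruction": "Write a story about llamas",
    "input": "",
    "output": "Once upon a time...",
}

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{record['instruction']}\n\n"
    f"### Response:\n{record['output']}"
)
```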
Vicuna 30b | 5 | [removed] | 2023-04-27T04:05:19 | https://www.reddit.com/r/LocalLLaMA/comments/1309a5b/vicuna_30b/ | Rear-gunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1309a5b | false | null | t3_1309a5b | /r/LocalLLaMA/comments/1309a5b/vicuna_30b/ | false | false | default | 5 | null |
Good computer specs for the next 5 years to be able to run local LLMs | 37 | If someone is going to buy a new computer now and wants to run local LLMs on it, what should the specs be so that it does not feel too outdated in a year or two? | 2023-04-27T05:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/130b1ng/good_computer_spec_for_the_next_5_years_to_be/ | kc_kamakazi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 130b1ng | false | null | t3_130b1ng | /r/LocalLLaMA/comments/130b1ng/good_computer_spec_for_the_next_5_years_to_be/ | false | false | self | 37 | null
GPT-4Chan in GGML? | 13 | I'm not sure how controversial this model is around here, I just started the local LLM journey and stumbled upon it, looks fun for the lulz.
Since this is somewhat old now, I couldn't find any version with the latest optimizations. My graphics card probably can't handle it even at 4bit quantization so I usually prefer the ggml versions.
I tried to convert it myself using [ggerganov's script](https://github.com/ggerganov/ggml/blob/master/examples/gpt-j/convert-h5-to-ggml.py) on the fp16 version but the script gets killed before completion. I'm running the webui on Windows WSL and I used the same conda environment to run the script as well. I have 32Gb of RAM but I feel like Windows is eating up too much of it to be able to perform the conversion. | 2023-04-27T08:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/130dzlf/gpt4chan_in_ggml/ | Rogerooo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 130dzlf | false | null | t3_130dzlf | /r/LocalLLaMA/comments/130dzlf/gpt4chan_in_ggml/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '_UHVm5O8dyWlaQFZi2-3yPot7eWx24On6dD9nu-Z0UA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=108&crop=smart&auto=webp&s=1308fbd2261e6820ad16f9f0787fdf5139cad847', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=216&crop=smart&auto=webp&s=6154e508e97a6c89f6b4df839d82dab145809300', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=320&crop=smart&auto=webp&s=8e7d168899b278711a307d73b298f43832cd361b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=640&crop=smart&auto=webp&s=05e96a0294b57bceb135c36f4c60812b8be0ce38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=960&crop=smart&auto=webp&s=d854b9fb9b4642392fc76376f5d8471de43232e6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=1080&crop=smart&auto=webp&s=d910902016999771936b0e41a3dfbc724aa009db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?auto=webp&s=7b35cc3a68bcc34c7cb90610fecfbbe78b237fc4', 'width': 1200}, 'variants': {}}]} |
What CPU/Memory? | 3 | Building a new computer (already have a 4090) mostly to run models that can't fit in the 24GB of VRAM (that's the only thing I'll use it for).
First of all, does using the CPU to run these even make sense and give useful results? And if it does, what should I buy? Does an X3D 16-core with its bigger cache help? I see that some lower-core-count CPUs have faster clocks; would one of those be better instead?
Does anyone have some benchmarks?
What memory speeds should I aim for? Is 7200 memory useful or can I go with 5600?
Thanks! | 2023-04-27T13:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/130mscm/what_cpumemory/ | Charuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 130mscm | false | null | t3_130mscm | /r/LocalLLaMA/comments/130mscm/what_cpumemory/ | false | false | self | 3 | null |
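Not a benchmark, but for the memory-speed half of the question there is a useful rule of thumb: CPU token generation (llama.cpp-style) is usually memory-bandwidth bound, because roughly the whole quantized model has to be streamed from RAM for every generated token. A rough ceiling calculation, assuming dual-channel memory and approximate 4-bit model sizes (treat these as upper bounds, not predictions):

```python
# Back-of-the-envelope ceiling for CPU token generation speed.
# Assumes generation streams roughly the whole quantized model per token.
def tokens_per_s_ceiling(mem_mts: int, channels: int, model_gb: float) -> float:
    bandwidth_gbs = mem_mts * 8 * channels / 1000   # MT/s * 8 bytes * channels
    return bandwidth_gbs / model_gb

for speed in (5600, 7200):
    for model_gb, label in ((20, "~30B q4"), (39, "~65B q4")):
        print(f"DDR5-{speed}, {label}: <= {tokens_per_s_ceiling(speed, 2, model_gb):.1f} tok/s")
```

By that arithmetic DDR5-7200 buys roughly a 25-30% higher ceiling than 5600, while extra cores and the X3D cache matter comparatively little once bandwidth is the bottleneck, since the working set is tens of gigabytes.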
Training a new language to a model | 1 | [removed] | 2023-04-27T14:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/130ncor/training_a_new_language_to_a_model/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 130ncor | false | null | t3_130ncor | /r/LocalLLaMA/comments/130ncor/training_a_new_language_to_a_model/ | false | false | default | 1 | null |
Best model to summarize text on Mac? | 13 | What’s the best model to summarize text on Mac? | 2023-04-27T16:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/130turq/best_model_to_summarize_text_on_mac/ | mmmm_frietjes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 130turq | false | null | t3_130turq | /r/LocalLLaMA/comments/130turq/best_model_to_summarize_text_on_mac/ | false | false | self | 13 | null |
Oobabooga Vicuna 13B - Seems to get lost after a query of about 1000 words - Any fixes? | 21 | Hi all! Completely new to all this but trying to figure out how this stuff works. Managed to get Oobabooga Vicuna 13B running locally on my Windows machine with a 5800X, 32GB and a 3080 12GB. It works really well, but I notice the replies tend to get more nonsensical the longer the query text gets. If I ask it to summarize some text and paste a few paragraphs, it's the bee's knees. After about 1000 words though, strange things happen.
I'm trying to see how it could be useful for editing stuff I've written before, but after 1000 words or so it ignores any previous directive and just continues writing the story from wherever it left off. Highly entertaining! Not entirely useful.
Anyway, as a noob, my question really is not so much how to fix this but what the limiting factor is in how much it can rationally respond to. I hear the word "token" tossed around a lot and envision it as a chunk of roughly word-sized text, but I have no idea how tokens relate to the above.
If anyone has a bit of time to explain, or can just point me towards a reference / video they've seen that might help, I'd appreciate it! | 2023-04-27T19:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1313b0q/oobabooga_vicuna_13b_seems_to_get_lost_after/ | LostHisDog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1313b0q | false | null | t3_1313b0q | /r/LocalLLaMA/comments/1313b0q/oobabooga_vicuna_13b_seems_to_get_lost_after/ | false | false | self | 21 | null
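On the "what's the limiting factor" part: LLaMA-family models such as Vicuna-13B work within a context window of roughly 2048 tokens covering the prompt plus the reply, and a token averages out to a bit less than an English word, so a 1000-1500 word prompt plus the character/instruction template already fills most of it; whatever falls outside the window is simply invisible to the model, which matches the "forgets the directive and keeps writing" behaviour. A small sketch for counting tokens with the model's own tokenizer, assuming the Hugging Face transformers library and a local Vicuna checkpoint (the path is a placeholder):

```python
# Count how many tokens a prompt uses relative to the ~2048-token window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/vicuna-13b")  # placeholder local path

prompt = "Please edit the following story for grammar: ..."  # your actual query text
n_tokens = len(tokenizer(prompt)["input_ids"])
print(f"{n_tokens} tokens used; roughly {2048 - n_tokens} left for the reply")
```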
What do you think about the LLM market in the next 12 months? Comments, reasoning, data points are highly welcome! I think LLMs with performance equivalent to today's GPT-4 will get 10x cheaper. Do you think ASIC hardware may appear for crunching transformer models (so no Nvidia needed)? | 10 |
https://preview.redd.it/tji74mfqshwa1.png?width=936&format=png&auto=webp&s=b433ae01f5756977e1d502028b9441270e654113 | 2023-04-27T21:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/13174cv/what_do_you_think_about_the_llm_market_in_the/ | goproai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13174cv | false | null | t3_13174cv | /r/LocalLLaMA/comments/13174cv/what_do_you_think_about_the_llm_market_in_the/ | false | false | 10 | null |
Best model for instructional design (condense raw educational content)? | 11 | I’m loving experimenting with Alpaca, but I’m behind the curve…
I have a desired use-case whereby I inject segments of raw educational instruction (transcripts of verbal instruction), and basically ask it to summarize and prepare bulleted lists.
Has anyone experimented with such a use case and could speak to the efficacy of certain models? If yes, what’s the current best option?
Thanks! | 2023-04-28T00:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/131bxby/best_model_for_instructional_design_condense_raw/ | MyVoiceIsElevating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131bxby | false | null | t3_131bxby | /r/LocalLLaMA/comments/131bxby/best_model_for_instructional_design_condense_raw/ | false | false | self | 11 | null |
Documentation on the configuration options | 2 | [removed] | 2023-04-28T00:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/131cjki/documentation_on_the_configuration_options/ | SatoshiReport | self.LocalLLaMA | 2023-04-28T00:37:37 | 0 | {} | 131cjki | false | null | t3_131cjki | /r/LocalLLaMA/comments/131cjki/documentation_on_the_configuration_options/ | false | false | default | 2 | null |
Is there a cpp wrapper for stable diffusion? | 9 | [removed] | 2023-04-28T02:17:24 | https://www.reddit.com/r/LocalLLaMA/comments/131eu5b/is_there_a_cpp_wrapper_for_stable_diffusion/ | rwaterbender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131eu5b | false | null | t3_131eu5b | /r/LocalLLaMA/comments/131eu5b/is_there_a_cpp_wrapper_for_stable_diffusion/ | false | false | default | 9 | null |
Overview of LLaMA models | 1 | [removed] | 2023-04-28T02:22:09 | https://www.reddit.com/r/LocalLLaMA/comments/131exvu/overview_of_llama_models/ | andw1235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131exvu | false | null | t3_131exvu | /r/LocalLLaMA/comments/131exvu/overview_of_llama_models/ | false | false | default | 1 | null |
Want to run two models at the same time, VRAM requirement? | 2 | Hi,
I am planning to buy a GPU for hosting my own ChatGPT and Github Copilot. I do not plan to train AI models, just use them (I believe the term for it is inference). Before I buy I need to determine how much VRAM I need:
1. If one model needs 7GB of VRAM and the other needs 13GB, does this mean I need a total of 20GB of VRAM?
2. Do the models consume all the VRAM they need all the time, or only while they are running inference? If the latter, do they always consume the maximum amount of VRAM while running? | 2023-04-28T03:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/131fsc1/want_to_run_two_models_at_the_same_time_vram/ | regunakyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131fsc1 | false | null | t3_131fsc1 | /r/LocalLLaMA/comments/131fsc1/want_to_run_two_models_at_the_same_time_vram/ | false | false | self | 2 | null
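A rough model for both questions: if both models are resident at the same time their footprints add, so 7 GB + 13 GB means about 20 GB plus some working overhead; and the weights occupy that VRAM for as long as the model stays loaded, not just while a request is being processed (inference adds extra on top for activations and the KV cache). A back-of-the-envelope sketch; the 20% overhead figure is an assumption for illustration, not a measurement:

```python
# Rough VRAM estimate: billions of parameters * bytes per weight, plus margin.
def est_vram_gb(params_b: float, bits_per_weight: int, overhead: float = 0.2) -> float:
    weights_gb = params_b * bits_per_weight / 8   # e.g. 7B at 16-bit -> 14 GB of weights
    return round(weights_gb * (1 + overhead), 1)

for params_b, bits in [(7, 16), (7, 4), (13, 16), (13, 4)]:
    print(f"{params_b}B model @ {bits}-bit: ~{est_vram_gb(params_b, bits)} GB")
```

Loading each model only when it is needed (and unloading it afterwards) is the usual way to fit both into less VRAM than the sum, at the cost of load time.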
Catch up with do-it-yourself AI and Audio | 1 | [removed] | 2023-04-28T03:02:27 | https://www.reddit.com/r/LocalLLaMA/comments/131fsgx/catch_up_with_doityourself_ai_and_audio/ | JustCametoSayHello | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131fsgx | false | null | t3_131fsgx | /r/LocalLLaMA/comments/131fsgx/catch_up_with_doityourself_ai_and_audio/ | false | false | default | 1 | null |
Advice to Finetune on 2xA100 80gb for 7B & 13B Parameter | 10 | I was wondering if anyone could point me in the direction of how to best finetune a 7B and 13B parameter model. I have a friend who is giving me access to one of his private nodes which has 2xA100 for the next 2.5 weeks. Would it be possible to train on a larger bit size of 32 (preferable) or 16? Apologies in advance if this is a repeat question. | 2023-04-28T03:22:39 | https://www.reddit.com/r/LocalLLaMA/comments/131g7d8/advice_to_finetune_on_2xa100_80gb_for_7b_13b/ | flashdude64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131g7d8 | false | null | t3_131g7d8 | /r/LocalLLaMA/comments/131g7d8/advice_to_finetune_on_2xa100_80gb_for_7b_13b/ | false | false | self | 10 | null |
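For scale: a full-precision fine-tune with Adam keeps roughly 16 bytes per parameter in weights and optimizer state before activations are counted, so fp32 on 13B is a stretch even across 2x80 GB; most people land on bf16 mixed precision on A100s (or LoRA if that is acceptable). A minimal sketch of just the precision-related knobs, assuming a Hugging Face Trainer-style setup with model and dataset omitted (all values illustrative):

```python
# Precision-related switches for a Trainer-style fine-tune; bf16 is the
# comfortable default on A100s (fp16-sized memory, fp32-sized exponent range).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out-llama-13b",      # placeholder path
    per_device_train_batch_size=2,   # tune to fit 80 GB
    gradient_accumulation_steps=16,
    bf16=True,                       # mixed precision in bfloat16
    # fp16=True,                     # alternative on GPUs without bf16
    gradient_checkpointing=True,     # trades compute for a large memory saving
)
```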
Apparently someone has already trained a 1b parameter RedPajama model, has anyone tried it out? | 39 | Link is here: https://huggingface.co/mosaicml/mpt-1b-redpajama-200b
Seems very interesting, there is also an instruction tuned version. But I can’t seem to much info on it, and the architecture isn’t compatible with llama.cpp. | 2023-04-28T03:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/131ggz4/apparently_someone_has_already_trained_a_1b/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 131ggz4 | false | null | t3_131ggz4 | /r/LocalLLaMA/comments/131ggz4/apparently_someone_has_already_trained_a_1b/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'byar48C-4xvQv-9A-NHOedLgVcYRtbCjHFZA_186oRI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=108&crop=smart&auto=webp&s=c69dc00301a5590dc8ca1609493492eb35678bb1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=216&crop=smart&auto=webp&s=c5a663ccad66f8b07c11b8c1d7cd9e1593ed4bc6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=320&crop=smart&auto=webp&s=81409945969e17df6157e072bfd41c1eb319a708', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=640&crop=smart&auto=webp&s=0cb38006cca4625e663f03b24ca2745f358457b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=960&crop=smart&auto=webp&s=cb81909f67099c4f51d25920448fd6f0181b06d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=1080&crop=smart&auto=webp&s=cc17726c6a2d8ed1291cfa337590fd7e487b1918', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?auto=webp&s=bd603b818b8f15efc363704dea10317db568d7bc', 'width': 1200}, 'variants': {}}]} |
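In case it helps anyone who wants to try it before a ggml conversion exists: since the architecture isn't supported by llama.cpp, the straightforward route is plain transformers, and the repo ships its own model code so `trust_remote_code=True` is required. A minimal sketch based on my reading of the model card (double-check the tokenizer choice there; the card points at the GPT-NeoX-20B tokenizer):

```python
# Load and sample from the MPT-1B RedPajama model with plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-1b-redpajama-200b"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # tokenizer per the model card
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

inputs = tokenizer("The RedPajama dataset is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```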