title (string, 1-300 chars) | score (int64, 0-3.09k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-3.09k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
P40 vs 3090 vs mac mini cluster? | 7 | Hello all.
I am interested in running the Llama 3.3 70B model in order to rid myself of paying for ChatGPT and Claude.
I already own a single 3090, and I know a dual 3090 setup is popular for this model. However, for the price of a 3090 on eBay (~800 bucks), I can buy 3 P40s and have money left over for a CPU and motherboard.
There is also always the option of going with a few mac minis and soldering in larger ram chips. Not ideal, but possible.
What are your thoughts? | 2025-01-05T00:28:07 | https://www.reddit.com/r/LocalLLaMA/comments/1htsvmz/p40_vs_3090_vs_mac_mini_cluster/ | Striking_Luck5201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htsvmz | false | null | t3_1htsvmz | /r/LocalLLaMA/comments/1htsvmz/p40_vs_3090_vs_mac_mini_cluster/ | false | false | self | 7 | null |
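For sizing questions like this, a quick back-of-envelope script is usually enough: weights dominate VRAM, and the effective bits-per-weight figures below are rough GGUF averages (my assumption, not from the post), with a flat allowance added for KV cache and runtime buffers.

```python
# Rough VRAM estimate for a 70B-class model at common GGUF quantizations.
# Assumptions: ~70.6B parameters (Llama 3.3 70B) and ~15% headroom for
# KV cache and buffers; effective bits/weight are approximate averages.
PARAMS = 70.6e9
QUANTS = {"FP16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8, "Q3_K_M": 3.9}

for name, bits in QUANTS.items():
    weights_gb = PARAMS * bits / 8 / 1024**3
    print(f"{name:>7}: ~{weights_gb:5.1f} GB weights, ~{weights_gb * 1.15:5.1f} GB with headroom")
```

By that estimate a Q4_K_M 70B comes in around 40 GB of weights, which is why 2x3090 (48 GB) is tight but workable and 3xP40 (72 GB) leaves room for long context; the P40s just run it much slower.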
Response of flagships LLMs to the question "Who are you, Claude?" - All LLMs want to impersonate Claude. | 2 | 2025-01-05T00:52:30 | https://www.reddit.com/r/LocalLLaMA/comments/1httdoc/response_of_flagships_llms_to_the_question_who/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1httdoc | false | null | t3_1httdoc | /r/LocalLLaMA/comments/1httdoc/response_of_flagships_llms_to_the_question_who/ | false | false | 2 | null |
||
What’s the Biggest Bottleneck for LLM Development? | 47 | What do you think is the biggest hurdle for the future of LLMs? Is it compute costs, data quality, or something else?
I’ve spoken to quite a few people about this, and the discussion often boils down to the availability of data or compute costs. I’d love to hear your thoughts as well! | 2025-01-05T01:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/1htu6kp/whats_the_biggest_bottleneck_for_llm_development/ | Equivalent_Owl9786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htu6kp | false | null | t3_1htu6kp | /r/LocalLLaMA/comments/1htu6kp/whats_the_biggest_bottleneck_for_llm_development/ | false | false | self | 47 | null |
themachine (12x3090) | 174 | Someone recently asked about large servers to run LLMs... | 2025-01-05T01:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1htulfp/themachine_12x3090/ | rustedrobot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htulfp | false | null | t3_1htulfp | /r/LocalLLaMA/comments/1htulfp/themachine_12x3090/ | false | false | self | 174 | null |
How to use Mistral Nemo Instruct chat template during fine-tuning | 2 | Hi, I am writing this post hoping someone will clarify something I do not understand about Mistral Nemo Instruct chat template.
It is my understanding that its chat template will apply the system message only in the last user-assistant interaction and only if there is no last assistant message. For example consider this chat:
{"role": "system", "content":"you are a helpful assistant"}
{"role": "user", "content": "Hello, how are you?"}
{"role": "assistant", "content": "I'm doing great. How can I help you today?"}
{"role": "user", "content": "I'd like to show off how chat templating works!"}
it will be formatted as
`<s>[INST]Hello, how are you?[/INST]I'm doing great. How can I help you today?</s>[INST]you are a helpful assistant`
`I'd like to show off how chat templating works![/INST]`
while if we remove the last user interaction ("I'd like to show off how chat templating works!") the chat will be formatted without the system message:
`<s>[INST]Hello, how are you?[/INST]I'm doing great. How can I help you today?</s>`
While this makes sense to me for inference, I do not understand how it will work during fine-tuning, since in that case the system message will always be excluded in the formatted template. Should I assume that when preparing data for the fine-tuning using Nemo template any instruction of the system message should be put in the user message instead? | 2025-01-05T01:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1htuo13/how_to_use_mistral_nemo_instruct_chat_template/ | hertric | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htuo13 | false | null | t3_1htuo13 | /r/LocalLLaMA/comments/1htuo13/how_to_use_mistral_nemo_instruct_chat_template/ | false | false | self | 2 | null |
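The quickest way to resolve questions like this is to render the template yourself and look at the resulting string before building the fine-tuning set. A minimal sketch with the Transformers tokenizer (assuming you have access to the Mistral-Nemo-Instruct repository) that prints both cases discussed above:

```python
# Render Mistral Nemo's chat template to see exactly where the system
# message ends up before preparing fine-tuning data.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

chat = [
    {"role": "system", "content": "you are a helpful assistant"},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Ends with a user turn: the system text is folded into the last [INST] block.
print(tok.apply_chat_template(chat, tokenize=False))

# Ends with an assistant turn (as fine-tuning examples do): the system text is dropped.
print(tok.apply_chat_template(chat[:-1], tokenize=False))
```

If the rendered training examples really do drop the system text, the usual workarounds are to prepend the instructions to the first (or last) user message, or to override the template with a custom one that keeps the system prompt, as long as the same template is used at inference time.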
LocalLLaMA for Summarising video lectures | 1 | [removed] | 2025-01-05T02:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1htv2yz/localllama_for_summarising_video_lectures/ | camillegarcia9595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htv2yz | false | null | t3_1htv2yz | /r/LocalLLaMA/comments/1htv2yz/localllama_for_summarising_video_lectures/ | false | false | self | 1 | null |
AI Tool That Turns GitHub Repos into Instant Wikis with DeepSeek v3! | 460 | 2025-01-05T03:15:04 | https://www.reddit.com/gallery/1htw7g5 | Physical-Physics6613 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1htw7g5 | false | null | t3_1htw7g5 | /r/LocalLLaMA/comments/1htw7g5/ai_tool_that_turns_github_repos_into_instant/ | false | false | 460 | null |
||
Deepseek-v3 is insanely popular. A 671B model's downloads are going to overtake QwQ-32B-preview. | 327 | 2025-01-05T03:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1htwh4l/deepseekv3_is_insanely_popular_a_671b_models/ | realJoeTrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htwh4l | false | null | t3_1htwh4l | /r/LocalLLaMA/comments/1htwh4l/deepseekv3_is_insanely_popular_a_671b_models/ | false | false | 327 | null |
||
🚀 Introducing **Titan Sight**: Seamless Web Search Integration for LLM Agents with Advanced Caching and Free Options! 🧠🔍 | 1 | [removed] | 2025-01-05T03:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/1htwi0s/introducing_titan_sight_seamless_web_search/ | Powerful_Soup7645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htwi0s | false | null | t3_1htwi0s | /r/LocalLLaMA/comments/1htwi0s/introducing_titan_sight_seamless_web_search/ | false | false | self | 1 | null |
🚀 Introducing **Titan Sight**: Seamless Web Search Integration for LLM Agents with Advanced Caching and Free Options! | 1 | [removed] | 2025-01-05T03:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1htwizm/introducing_titan_sight_seamless_web_search/ | Powerful_Soup7645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htwizm | false | null | t3_1htwizm | /r/LocalLLaMA/comments/1htwizm/introducing_titan_sight_seamless_web_search/ | false | false | self | 1 | null |
Exploring Local LLMs Without Pretrained Helper Features for Enhanced Performance? | 0 | Hey everyone,
I’ve been diving into the world of local LLMs and had a thought I wanted to explore with the community. Most LLMs seem to be heavily pretrained to assist users intuitively, which is great for ease of use but might come at the cost of performance, especially when considering token efficiency or backend processing power.
What I’m curious about is the possibility of running a local LLM that strips away the "helper" features (e.g., preloaded conversational or empathic capabilities) and instead focuses purely on raw performance. This kind of model would require more precise and detailed prompts from the user to achieve desired results but could potentially process requests more efficiently or accurately.
* Does an LLM like this already exist?
* Does anyone here have experience with such LLM setups?
* Are there specific models or configurations you’d recommend for a stripped-down approach?
* Any trade-offs you’ve noticed between usability and performance in local LLMs?
I’d love to hear your thoughts, insights, or recommendations! Thanks in advance for sharing your experiences. 😊 | 2025-01-05T03:32:13 | https://www.reddit.com/r/LocalLLaMA/comments/1htwj35/exploring_local_llms_without_pretrained_helper/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htwj35 | false | null | t3_1htwj35 | /r/LocalLLaMA/comments/1htwj35/exploring_local_llms_without_pretrained_helper/ | false | false | self | 0 | null |
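The closest thing to a "no helper features" model is a base (non-instruct) checkpoint driven by plain completion prompts: there is no chat template or assistant persona, so the prompt does all the steering. A minimal llama-cpp-python sketch (the model path is a placeholder; any base-model GGUF works the same way):

```python
# Raw completion against a base (non-instruct) model: no chat template,
# no built-in assistant behaviour -- the prompt does all the steering.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3.1-8b.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

prompt = (
    "### Task: list the three key risks mentioned in the report below.\n"
    "### Report:\n<your text here>\n"
    "### Key risks:\n- "
)
out = llm(prompt, max_tokens=200, temperature=0.2, stop=["###"])
print(out["choices"][0]["text"])
```

The trade-off raised in the post is real: you save the tokens an instruct model spends on politeness and preamble, but you take on the work of writing prompts precise enough that a base model's continuation is actually the answer you wanted.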
Introducing kokoro-onnx TTS | 95 | Hey everyone!
I recently worked on the *kokoro-onnx* package, which is a TTS (text-to-speech) system built with onnxruntime, based on the new *kokoro* model ([https://huggingface.co/hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M))
The model is really cool and includes multiple voices, including a whispering feature similar to Eleven Labs.
It works faster than real-time on macOS M1. The package supports Linux, Windows, macOS x86-64, and arm64!
You can find the package here:
[https://github.com/thewh1teagle/kokoro-onnx](https://github.com/thewh1teagle/kokoro-onnx)
Demo:
*(Demo video embedded in the original post.)*
| 2025-01-05T03:34:02 | https://www.reddit.com/r/LocalLLaMA/comments/1htwkba/introcuding_kokoroonnx_tts/ | WeatherZealousideal5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htwkba | false | null | t3_1htwkba | /r/LocalLLaMA/comments/1htwkba/introcuding_kokoroonnx_tts/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': '7eehV2XcSO66YwpZvbmmLkh4WEE7p2glytEMg55eeAw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=108&crop=smart&auto=webp&s=92cfc3df092163df6e76edecf249be0243c6fb17', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=216&crop=smart&auto=webp&s=4fda7fe6de500bd32173f9eaf4c1dbeb853a0355', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=320&crop=smart&auto=webp&s=8867f0eb88644560eabc8a6b216bc7e5d8fe9521', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=640&crop=smart&auto=webp&s=0ddc1534af5f00261cdb1ed1b5eaa475f95f839a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=960&crop=smart&auto=webp&s=a5fd09c695aaeb1af8eecc126abf1d3a5ad39621', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=1080&crop=smart&auto=webp&s=9f97cff2f09fdbc84e403b7c09690957f67df949', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?auto=webp&s=fbec1ab60ce313a08de3b4e6b2ee1f34d687170a', 'width': 1200}, 'variants': {}}]} |
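For anyone wanting to try it, usage is roughly the following; the file names, voice IDs, and method signature here are from memory of the project's README and may have changed, so treat this as a sketch and check the repository for the current quickstart.

```python
# Sketch of kokoro-onnx usage; exact file names / voice IDs may differ from
# the current release -- see the project's README for the real quickstart.
import soundfile as sf
from kokoro_onnx import Kokoro

# Model weights and voices file are downloaded from the repo's releases (assumed names).
kokoro = Kokoro("kokoro-v0_19.onnx", "voices.json")
samples, sample_rate = kokoro.create(
    "Hello from kokoro, running locally with onnxruntime.",
    voice="af",        # one of the bundled voice IDs
    speed=1.0,
    lang="en-us",
)
sf.write("out.wav", samples, sample_rate)
```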
What Could Be the HackerRank or LeetCode Equivalent for Prompt Engineers? | 1 | [removed] | 2025-01-05T04:03:29 | https://www.reddit.com/r/LocalLLaMA/comments/1htx3xs/what_could_be_the_hackerrank_or_leetcode/ | Comfortable_Device50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htx3xs | false | null | t3_1htx3xs | /r/LocalLLaMA/comments/1htx3xs/what_could_be_the_hackerrank_or_leetcode/ | false | false | self | 1 | null |
Fooling AI Detection Tools | 8 | I wonder how people are fooling AI detection software like [ZeroGPT](http://zerogpt.com/), [GPTZero](http://gptzero.me/) and [Turnitin's built in detector](http://turnitin.com/).
Adjusting temperature used to work quite well, but it seems like some detectors have learnt to counter that trick. Recently, I tried using [NaturalLM](https://huggingface.co/qingy2024/NaturalLM-7B-Instruct), but it didn't work at all (flagged at 100% AI).
Since this is [LocalLLaMA](https://www.reddit.com/r/LocalLLaMA), and we all run our own models, what other parameters (e.g. top_k, top_p) are people messing with to evade detection? Alternatively, what can you manually do to the text to evade detection? | 2025-01-05T05:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1htyecp/fooling_ai_detection_tools/ | Mysterious_Finish543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htyecp | false | null | t3_1htyecp | /r/LocalLLaMA/comments/1htyecp/fooling_ai_detection_tools/ | false | false | self | 8 | null |
Looking for a specific all-in-one model I can't seem to find. | 1 | [removed] | 2025-01-05T06:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1htz9o4/looking_for_a_specific_allinone_model_i_cant_seem/ | Suffering_Hairline23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htz9o4 | false | null | t3_1htz9o4 | /r/LocalLLaMA/comments/1htz9o4/looking_for_a_specific_allinone_model_i_cant_seem/ | false | false | self | 1 | null |
Llama 3.2 local fine tune | 1 | [removed] | 2025-01-05T06:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/1htzhg6/llama_32_local_fine_tune/ | FigPsychological3731 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htzhg6 | false | null | t3_1htzhg6 | /r/LocalLLaMA/comments/1htzhg6/llama_32_local_fine_tune/ | false | false | self | 1 | null |
Open source LLMs to extract data flows from system context diagram | 1 | [removed] | 2025-01-05T06:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1htzj08/open_source_llms_to_extract_data_flows_from/ | Sensitive-Feed-4411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htzj08 | false | null | t3_1htzj08 | /r/LocalLLaMA/comments/1htzj08/open_source_llms_to_extract_data_flows_from/ | false | false | 1 | null |
|
Data extraction from diagrams | 1 | [removed] | 2025-01-05T06:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1htzn7n/data_extraction_from_diagrams/ | Sensitive-Feed-4411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htzn7n | false | null | t3_1htzn7n | /r/LocalLLaMA/comments/1htzn7n/data_extraction_from_diagrams/ | false | false | self | 1 | null |
*Naive User* Do I need to host locally? | 0 | After tinkering around with self-hosting a local model, I have come to realize that what I am really after is local integration in a Windows environment, for use in browsing, writing, generative media, and limited local file searching.
I would like some advice on which tools/interfaces/UIs/plugins I can run on Windows which rely on an online model like OpenAI to do the heavy lifting while interacting with local applications.
Again, this is a new interest so forgive me if this post comes across as largely ignorant. | 2025-01-05T06:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1htzoyl/naive_user_do_i_need_to_host_locally/ | Acclaim1H | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htzoyl | false | null | t3_1htzoyl | /r/LocalLLaMA/comments/1htzoyl/naive_user_do_i_need_to_host_locally/ | false | false | self | 0 | null |
Data extraction from diagrams | 1 | [removed] | 2025-01-05T06:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/1htzp04/data_extraction_from_diagrams/ | Sensitive-Feed-4411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1htzp04 | false | null | t3_1htzp04 | /r/LocalLLaMA/comments/1htzp04/data_extraction_from_diagrams/ | false | false | self | 1 | null |
Help Needed: Setting Up my own LLM NSFW from Scratch to Finish – No Coding Experience! | 1 | [removed] | 2025-01-05T08:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hu1nv5/help_needed_setting_up_my_own_llm_nsfw_from/ | AstralEchoes999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu1nv5 | false | null | t3_1hu1nv5 | /r/LocalLLaMA/comments/1hu1nv5/help_needed_setting_up_my_own_llm_nsfw_from/ | false | false | nsfw | 1 | null |
why `max_model_len` influence generation in vllm | 1 | [removed] | 2025-01-05T08:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hu1pqn/why_max_model_len_influence_generation_in_vllm/ | xiaobanni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu1pqn | false | null | t3_1hu1pqn | /r/LocalLLaMA/comments/1hu1pqn/why_max_model_len_influence_generation_in_vllm/ | false | false | self | 1 | null |
URIAL: Untuned LLMs with Restyled In-context Alignment (Rethinking alignment), still relevant? | 1 | [https://arxiv.org/abs/2312.01552](https://arxiv.org/abs/2312.01552)
What's the current take on this? Do you know if is is used anywhere? | 2025-01-05T08:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hu1pwz/urial_untuned_llms_with_restyled_incontext/ | Fantastic_Climate_90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu1pwz | false | null | t3_1hu1pwz | /r/LocalLLaMA/comments/1hu1pwz/urial_untuned_llms_with_restyled_incontext/ | false | false | self | 1 | null |
LLM Benchmarks that run with Ollama? | 0 | I'm having a hard time even finding benchmarks for LLMs at all. And they all seem to require either vLLM or llama.cpp (both of which are notorious pains in the ass to set up on Windows), or they just don't work with GGUFs.
Also, Ollama has consistently given me the fastest tokens/s of any method.
Anyone have a quick and painless benchmark/eval suite I can use seamlessly with my Ollama endpoint?
Thanks | 2025-01-05T08:34:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hu1ute/llm_benchmarks_that_run_with_ollama/ | Imjustmisunderstood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu1ute | false | null | t3_1hu1ute | /r/LocalLLaMA/comments/1hu1ute/llm_benchmarks_that_run_with_ollama/ | false | false | self | 0 | null |
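Ollama's own API already reports the numbers needed for a quick throughput check, so a minimal timing script needs nothing beyond `requests`; for quality benchmarks, lm-evaluation-harness can also be pointed at Ollama's OpenAI-compatible `/v1` endpoint, though that takes more setup. A sketch of the throughput part:

```python
# Quick tokens/sec check against a local Ollama server (not a quality benchmark).
import requests

def tokens_per_second(model: str, prompt: str) -> float:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    r.raise_for_status()
    d = r.json()
    # eval_count = generated tokens; eval_duration is reported in nanoseconds
    return d["eval_count"] / (d["eval_duration"] / 1e9)

print(f"{tokens_per_second('llama3.1:8b', 'Explain KV caching in two sentences.'):.1f} tok/s")
```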
🙏🙏🇬🇲🇬🇲🥲🥲struggle day by day for something to eat at home..Gambia a very hard country in west Africa | 0 | 2025-01-05T09:09:22 | https://www.reddit.com/gallery/1hu2im2 | Modoulamin_cham12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hu2im2 | false | null | t3_1hu2im2 | /r/LocalLLaMA/comments/1hu2im2/struggle_day_by_day_for_something_to_eat_at/ | false | false | 0 | null |
||
🙏🙏🇬🇲🇬🇲🥲🥲struggle day by day for something to eat at home..Gambia a very hard country in west Africa | 1 | 2025-01-05T09:10:00 | https://www.reddit.com/gallery/1hu2j3d | Modoulamin_cham12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hu2j3d | false | null | t3_1hu2j3d | /r/LocalLLaMA/comments/1hu2j3d/struggle_day_by_day_for_something_to_eat_at/ | false | false | 1 | null |
||
I have 6 million characters of ElevenLabs credits, what's the best way to use them? | 52 | Generating some synth data? making some audio books? idk -- i subscribed to the scale plan and forgot to cancel, drained my bank account for over $1k, lets make it worth it! | 2025-01-05T09:14:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hu2mc4/i_have_6_million_characters_of_elevenlabs_credits/ | CrazyPhilosopher1643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu2mc4 | false | null | t3_1hu2mc4 | /r/LocalLLaMA/comments/1hu2mc4/i_have_6_million_characters_of_elevenlabs_credits/ | false | false | self | 52 | null |
Ah we just Discovered a new commercial usecase for o1 Model. And this can be done without much user effort!!! #StoryPersonalizationUsingAI | 31 | Many people ask how one can use reasoning models such as o1 for more real-life and commercial purposes, citing that they are just reasoning models mostly suited to scientific work. So here is an experiment which we've conducted to personalize stories for a cultural context from an original story. For example, if there is an original story in an American or Russian setting, we retain the core message of the story and apply it to a different setting such as Indian or European. Although sometimes it might not be possible to adapt the original story to different cultural contexts, as part of this project we've taken stories which have universal human values across different cultural contexts (American/Russian/Irish/Danish) and applied them to an Indian setting.
Here are our personalized stories (All of these stories are < 2000 words and can be read in <= 10 mins):
1. Indian Adaptation of the story [Hearts and Hands](https://americanliterature.com/author/o-henry/short-story/hearts-and-hands/) by American author O'Henry.
2. Indian Adaptation of the story [Vanka](https://americanliterature.com/author/anton-chekhov/short-story/vanka/) by Russian author Anton Chekhov.
3. Indian Adaptation of the story [Selfish Giant](https://americanliterature.com/author/oscar-wilde/short-story/the-selfish-giant/) by Irish author Oscar Wilde.
4. Indian Adaptation of [Little Match Girl](https://americanliterature.com/author/hans-christian-andersen/short-story/the-little-match-girl/) by Danish author Hans Christian Andersen.
**Github Link:** https://github.com/desik1998/PersonalizingStoriesUsingAI/tree/main
**X Post (Reposted by Lukasz Kaiser - Major Researcher who worked on o1 Model):** https://x.com/desik1998/status/1875551392552907226
**What actually gets personalized?**
The characters/names/cities/festivals/climate/food/language-tone are all adapted/changed to local settings while maintaining the overall crux of the original stories.
For example, here are the personalizations done as part of Vanka: The name of the protagonist is changed from Zhukov to Chotu, The festival setting is changed from Christmas to Diwali, The Food is changed from Bread to Roti and Sometimes within the story, conversations include Hindi words (written in English) to add emotional depth and authenticity. This is all done while preserving the core values of the original story such as child innocence, abuse and hope.
### Benefits:
1. Personalized stories have more relatable characters, settings and situations which helps readers relate and connect deeper to the story.
2. **Reduced cognitive load for readers:** We've showed our [personalized stories](https://github.com/desik1998/PersonalizingStoriesUsingAI/tree/main/PersonalizedStories) to multiple people and they've said that it's easier to read the personalized story than the original story because of the familiarity of the names/settings in the personalized story.
### How was this done?
**Personalizing stories involves navigating through multiple possibilities, such as selecting appropriate names, cities, festivals, and cultural nuances to adapt the original narrative effectively. Choosing the most suitable options from this vast array can be challenging. This is where o1’s advanced reasoning capabilities shine. By explicitly prompting the model to evaluate and weigh different possibilities, it can systematically assess each option and make the optimal choice. Thanks to its exceptional reasoning skills and capacity for extended, thoughtful analysis, o1 excels at this task. In contrast, other models often struggle due to their limited ability to consider multiple dimensions over an extended period and identify the best choices. This gives o1 a distinct advantage in delivering high-quality personalizations.**
Here is the procedure we followed and that too using very simple prompting techniques:
**Step 1:** Give the whole original story to the model and ask how to personalize it for a cultural context. Ask the model to explore all the different possible choices for personalization, compare each of them and get the best one. **For now, we ask the model to avoid generating the whole personalized story for now and let it use up all the tokens for deciding what all things need to be adapted for doing the personalization.**
Prompt:
```
Personalize this story for Indian audience with below details in mind:
1. The personalization should relate/sell to a vast majority of Indians.
2. Adjust content to reflect Indian culture, language style, and simplicity, ensuring the result is easy for an average Indian reader to understand.
3. Avoid any "woke" tones or modern political correctness that deviates from the story’s essence.
Identify all the aspects which can be personalized then as while you think, think through all the different combinations of personalizations, come up with different possible stories and then give the best story. Make sure to not miss details as part of the final story. Don't generate the story for now and just give the best adaptation. We'll generate the story later.
```
**Step 2:** Now ask the model to generate the personalized story.
**Step 3:** If the story is not good enough, just tell the model that it's not good enough and ask it to adapt more for the local culture. (Surprisingly, it betters the story!!!).
**Step 4:** Some minor manual changes if we want to make.
Here is the detailed conversations which we've had with o1 model for generating each of the personalized stories [[1](https://chatgpt.com/share/6762e3f7-0994-8011-853b-1b1553bc7f82), [2](https://chatgpt.com/share/676bd09b-12d4-8011-9102-da7defbff2b9), [3](https://chatgpt.com/share/6762e40a-21e8-8011-b32d-7865f5e53814), [4](https://chatgpt.com/share/676c0aca-04a0-8011-b81a-e6577126e1b9)].
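For readers who want to reproduce Steps 1-2 programmatically rather than in the ChatGPT UI, a minimal sketch with the OpenAI Python SDK looks like this; the model name and file paths are placeholders (any strong reasoning model you have access to should behave similarly), and the prompt is the Step 1 prompt shown above.

```python
# Sketch of Steps 1-2: first ask for the adaptation plan, then the story itself.
from openai import OpenAI

client = OpenAI()
MODEL = "o1"  # placeholder; substitute whichever reasoning model you have access to

story = open("hearts_and_hands.txt").read()      # placeholder path to the original story
step1_prompt = open("step1_prompt.txt").read()   # the Step 1 prompt shown above

# Step 1: let the model spend its tokens deciding *what* to adapt.
messages = [{"role": "user", "content": f"{step1_prompt}\n\n{story}"}]
plan = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": plan.choices[0].message.content})

# Step 2: ask for the personalized story, reusing the model's own plan as context.
messages.append({"role": "user", "content": "Now generate the full personalized story."})
final = client.chat.completions.create(model=MODEL, messages=messages)
print(final.choices[0].message.content)
```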
### Other approaches tried (Not great results):
1. Directly prompting a non reasoning model to give the whole personalized story doesn't give good outputs.
2. Transliteration based approach for non reasoning model:
2.1 We give the whole story to LLM and ask it how to personalize on a high level.
2.2 We then go through each para of the original story and ask the LLM to personalize the current para. And as part of this step, we also give ```the whole original story, personalized story generated till current para and the high level personalizations which we got from 2.1 for the overall story.```
2.3 We append each of the personalized paras to get the final personalized story.
But The main problem with this approach is:
1. We've to heavily prompt the model and these prompts might change based on story as well.
2. The model temperature needs to be changed for different stories.
3. The cost is very high because we've to give the whole original story, personalized story for each part of the para personalization.
4. The story generated is also not very great and the model often goes in a tangential way.
**From this experiment, we can conclude that prompting alone a non reasoning model might not be sufficient and additional training by manually curating story datasets might be required**. Given this is a manual task, we can distill the stories from o1 to a smaller non reasoning model and see how well it does.
[Here](https://github.com/desik1998/PersonalizingStoriesUsingAI/blob/main/OtherApproachesCode/Personalized_Novel_Generation_POC_draft.ipynb) is the overall code for this approach and [here is the personalized story generated using this approach for "Gifts of The Magi"](https://raw.githubusercontent.com/desik1998/PersonalizingStoriesUsingAI/refs/heads/main/OtherApproachesCode/Gifts%20of%20Selfless%20Love.txt) which doesn't meet the expectations.
### Next Steps:
1. Come up with an approach for long novels. Currently the stories are no more than 2000 words.
2. Making this work with smaller LLMs': Gather Dataset for different languages by hitting o1 model and then distill that to smaller model.
* This requires a dataset for Non Indian settings as well. So request people to submit a PR as well.
3. The current work is at a macro grain (a country level personalization). Further work needs to be done to understand how to do it at Individual level and their independent preferences.
4. The Step 3 as part of the Algo might require some manual intervention and additionally we need to make some minor changes post o1 gives the final output. We can evaluate if there are mechanisms to automate everything.
### How did this start?
Last year (9 months back), we were working on creating a novel with the Subject ["What would happen if the Founding Fathers came back to modern times"](https://github.com/desik1998/NovelWithLLMs). Although we were able to [generate a story, it wasn't upto the mark](https://github.com/desik1998/NovelWithLLMs/blob/main/Novel.md). We later posted a post (currently deleted) in Andrej Karpathy's LLM101 Repo to build something on these lines. Andrej took the same idea and a few days back tried it with o1 and [got decent results](https://x.com/karpathy/status/1868903650451767322). Additionally, a few months back, we got feedback that writing a complete story from scratch might be difficult for an LLM so instead try on Personalization using existing story. After trying many approaches, each of the approaches falls short but it turns out o1 model excels in doing this easily. Given there are a lot of existing stories on the internet, we believe people can now use the approach above or tweak it to create new novels personalized for their own settings and if possible, even sell it.
### LICENSE
MIT - **We're open sourcing our work and everyone is encouraged to use these learnings to personalize non licensed stories into their own cultural context for commercial purposes as well 🙂.** | 2025-01-05T09:49:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hu3c48/ah_we_just_discovered_a_new_commercial_usecase/ | Desik_1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu3c48 | false | null | t3_1hu3c48 | /r/LocalLLaMA/comments/1hu3c48/ah_we_just_discovered_a_new_commercial_usecase/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'ZHeh8GIpp8wV4AItR2Xu0nJOgrLANvpq_ZEPSpOPgQE', 'resolutions': [{'height': 103, 'url': 'https://external-preview.redd.it/yFebGM8cnCWiwWXLG5QIftKmZc-f8MhBTuBLzJ42wRU.jpg?width=108&crop=smart&auto=webp&s=afb31e2d1542f39e0c6c72d22f3dd8403595a90b', 'width': 108}, {'height': 206, 'url': 'https://external-preview.redd.it/yFebGM8cnCWiwWXLG5QIftKmZc-f8MhBTuBLzJ42wRU.jpg?width=216&crop=smart&auto=webp&s=294f74a4e6072f4da35af364ca1e9e11b261f7bc', 'width': 216}, {'height': 305, 'url': 'https://external-preview.redd.it/yFebGM8cnCWiwWXLG5QIftKmZc-f8MhBTuBLzJ42wRU.jpg?width=320&crop=smart&auto=webp&s=15bff2bfb9e42343638df3393377693111fe0f16', 'width': 320}], 'source': {'height': 434, 'url': 'https://external-preview.redd.it/yFebGM8cnCWiwWXLG5QIftKmZc-f8MhBTuBLzJ42wRU.jpg?auto=webp&s=b31b383c1d70e63061a6f75d37f18b49636ce722', 'width': 455}, 'variants': {}}]} |
Building a Cheap ARMv9 SBC Cluster to Run Deepseek v3 | 13 | I'm toying with the idea of building a cheap ARMv9 SBC cluster to run Deepseek v3. From what I've researched, you need around **664 GB of RAM** to handle the model effectively. Based on some quick calculations, I imagine a setup with around 11 nodes could work, assuming something like this per node:
* **CPU**: ARMv9.2 with ~30 TOPS (maybe add 10 more TOPS if the onboard GPU can be utilized)
* **RAM**: 64 GB at 100 GB/s
* **Network**: Dual 5 GbE (bonding could be an option)
* **Storage**: NVMe, perhaps 1 TB per node, though something smaller might suffice if Ceph is used
I'm curious about a few things:
1. **Software**: What frameworks/tools would you recommend for efficiently distributing the model across the cluster? Kubernetes with MPI, Ray, or something else entirely?
2. **Performance**: Any ballpark expectations on tokens-per-second (toks/s) this kind of setup might achieve?
3. **Cost Optimization**: Are there better/cheaper alternatives for ARMv9 boards or strategies to optimize cost while maintaining the required performance?
Would love to hear your thoughts, especially if you've attempted something similar! | 2025-01-05T10:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hu3pj1/building_a_cheap_armv9_sbc_cluster_to_run/ | flatmax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu3pj1 | false | null | t3_1hu3pj1 | /r/LocalLLaMA/comments/1hu3pj1/building_a_cheap_armv9_sbc_cluster_to_run/ | false | false | self | 13 | null |
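On the tokens-per-second question, a back-of-envelope bound is easy to compute if you assume decode is memory-bandwidth bound and that, with layer-wise pipeline parallelism, a single stream still has to pull all active weights through one node's bandwidth at a time. The ~37B active parameters and ~4.5 bits/weight below are my assumptions, not figures from the post.

```python
# Back-of-envelope single-stream decode bound for a pipeline-parallel MoE setup.
active_params = 37e9      # DeepSeek-V3 routes roughly 37B active params per token (assumption)
bits_per_weight = 4.5     # rough Q4-class quantization (assumption)
node_bw = 100e9           # bytes/sec per node, from the post

bytes_per_token = active_params * bits_per_weight / 8
print(f"~{bytes_per_token / 1e9:.0f} GB touched per token "
      f"-> ~{node_bw / bytes_per_token:.1f} tok/s upper bound per stream")
```

That lands in the low single digits per stream before counting the 5 GbE hops between nodes, so the cluster would mostly earn its keep through batching rather than single-chat latency.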
Is there an easy way to deploy and test new models? | 5 |
So I do a lot of prototyping, sometimes research related and sometimes just to see if the hype around a new model is actually warranted.
Here’s my problem:
Since I’m essentially just trying things out, I end up with models collecting dust on some compute platform. I can’t do it locally because I’m GPU poor.
Here’s what I’m looking for or want to build if it doesn’t exist:
A place where I can just pick the latest open-source model, generate and deploy that model so I can then use the API locally with whatever I’m prototyping. I don’t want to pay for storing the models, but I’m fine with paying for usage (within reason). Also Ideally, I don’t want to spend any time building the API and wrestling with dependencies either (I just don’t have the time). I want a platform that handles all that for me.
Any service that lets me do the above? | 2025-01-05T10:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hu3z86/is_there_an_easy_way_to_deploy_and_test_new_models/ | _lindt_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu3z86 | false | null | t3_1hu3z86 | /r/LocalLLaMA/comments/1hu3z86/is_there_an_easy_way_to_deploy_and_test_new_models/ | false | false | self | 5 | null |
Randomised SVD/PCA for longer context - any potential? | 5 | I've had this idea rattling in my brain for a little now, and would love some input on whether it has potential - there's so many proposed efficiency improvements to attention, I've lost track of what has and hasn't been tried!
The process would be something to the effect of:
1. First compute the Keys and Queries as normal
2. Then, conduct randomised PCA on the queries to identify the D largest components of the Query space.
3. For each of the D largest components, keep the Key vector that best matches that component
4. Do regular attention on those Keys.
Given that typical attention for a sequence of length N has complexity O(N^2), while randomised PCA over the queries costs roughly O(N*d*D) and attending over only D selected keys costs O(N*D), both linear in N, there are potentially some pretty big inference-time savings here.
I can't see any existing research into whether this has legs. LoRA and Linformers come close in that they also use lower-rank approximations, but I think what i'm proposing is unique. Any insights? | 2025-01-05T10:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hu43wm/randomised_svdpca_for_longer_context_any_potential/ | enjeyw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu43wm | false | null | t3_1hu43wm | /r/LocalLLaMA/comments/1hu43wm/randomised_svdpca_for_longer_context_any_potential/ | false | false | self | 5 | null |
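To make the proposal concrete, here is a rough single-head PyTorch sketch of steps 1-4 (my interpretation of the idea, untested for quality): `torch.pca_lowrank` does the randomized low-rank step, one key is kept per principal direction, and ordinary softmax attention runs over the surviving keys.

```python
import torch
import torch.nn.functional as F

def pca_pruned_attention(queries, keys, values, n_components=32):
    # queries, keys, values: (N, d); single head, no batching
    d = queries.shape[-1]
    # 1-2) randomized PCA of the queries: principal directions are the columns of comps
    _, _, comps = torch.pca_lowrank(queries, q=n_components)   # comps: (d, n_components)
    # 3) keep the key that projects most strongly onto each direction (deduplicated)
    keep = torch.unique((keys @ comps).abs().argmax(dim=0))
    k_sel, v_sel = keys[keep], values[keep]
    # 4) regular scaled dot-product attention over the surviving keys
    attn = F.softmax(queries @ k_sel.T / d ** 0.5, dim=-1)
    return attn @ v_sel

out = pca_pruned_attention(torch.randn(2048, 64), torch.randn(2048, 64), torch.randn(2048, 64))
print(out.shape)  # torch.Size([2048, 64])
```

Whether quality survives keeping only one key per component is the open question; the cost side, at least, stays linear in sequence length as estimated above.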
Studying | 1 | [removed] | 2025-01-05T10:55:34 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hu4iln | false | null | t3_1hu4iln | /r/LocalLLaMA/comments/1hu4iln/studying/ | false | false | default | 1 | null |
||
Studying | 1 | [removed] | 2025-01-05T10:57:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hu4jtl/studying/ | Significant_Expert72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu4jtl | false | null | t3_1hu4jtl | /r/LocalLLaMA/comments/1hu4jtl/studying/ | false | false | self | 1 | null |
We deserve something better than LangChain | 140 | Over the past few years, I spent a lot of time building apps using LLMs while I was in grad school. Throughout the development journey, I found one common problem in the current AI app development ecosystem.
**The existing frameworks, such as LangChain, have too much abstraction.**
Why is this a problem? There are a couple of reasons.
1. **Black Boxing:** it makes your backend a black box, so it's hard to see what is actually going on in your pipeline.
2. **Too Generic**: their tools are too generic when you want to build a specific feature, and it's not easy to read the source code and customize it.
3. **Learning Curve**: they define whole new syntaxes that you have to learn. Frameworks should exist to reduce the difficulty of repetitive tasks, but when we have to spend so much time learning the framework just to simplify some basic processes, it's a waste of time.
**What is the cause of this problem?**
The root of this problem is very simple. Their architecture design is "framework-centric" and tries to hide database-like operations as much as possible. This makes the framework highly abstracted.
**Suggested Solution**
The solution to this problem is making a new abstraction architecture that is "database-centric," which means that you can use it as if you're using a database rather than a framework. I call this database "CapybaraDB" and you can see [the docs here](https://docs.capybaradb.co)
**Example of the new architecture (CapybaraDB)**
Imagine you have a diary app and want to save your user's data in a way you can retrieve it later for a RAG pipeline. With existing frameworks, you need to carefully design your schema and pipeline so that it fits the framework syntax. But in my new database, you can save your data just like you are using MongoDB. The only difference between CapybaraDB and MongoDB is that you have to wrap the text that you wish to embed with the "EmbText" class. That's all you have to do! Everything, including chunking text, embedding, and indexing, will be handled by the CapybaraDB server side.
{ # 'content' field will be the subject of semantic search later.
"title": "Diary Entry: A Quiet Morning",
"content": EmbText(
"March 5, 2024 - Woke up early today. The sunrise painted the sky in soft hues of orange and pink. I brewed a cup of coffee and sat by the window. It's these quiet moments that remind me how peaceful mornings can be before the world starts rushing around."
),
"type": "diary",
"status": "Personal"
},
**EmbText**
Any text data you wrap with "EmbText" will be chunked, embedded, and indexed automatically and asynchronously. You can use it in nested fields too.
Ex. EmbText("a text you want to embed")
**CapybaraDB data processing on the server side**
Detect all EmbText data in the saved documents
↓
chunk into smaller strings
↓
embed
↓
save documents and vectors
↓
ready for semantic search at any time
**Minimum but necessary abstractions are provided**
CapybaraDB doesn't provide too much abstraction, as you can see. It only provides the minimum & necessary abstraction so that you can add your custom pipeline as you like on top of it. You can use it as if you are using MongoDB with an extra embedding feature for AI apps, which the original MongoDB doesn't offer.
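To illustrate the "database, not framework" shape being described, here is a purely hypothetical client-side sketch; the import path, class, and method names below are my assumptions for illustration, not the real SDK, so check the linked docs for the actual API. The point is only the MongoDB-like workflow, with EmbText marking what gets embedded server-side.

```python
# Hypothetical client code -- names are illustrative assumptions, not the real SDK.
from capybaradb import CapybaraDB, EmbText   # assumed import path

collection = CapybaraDB(api_key="...").db("diary_app").collection("entries")

# Insert works like a document store; chunking/embedding happen server-side.
collection.insert_one({
    "title": "Diary Entry: A Quiet Morning",
    "content": EmbText("March 5, 2024 - Woke up early today. ..."),
    "type": "diary",
})

# Later: semantic search over everything that was stored as EmbText.
hits = collection.query("quiet, peaceful mornings", top_k=3)
```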
This project was originally an internal tool. But I thought I could help someone else and decided to productize it.
[Whole Documentation is available here](https://docs.capybaradb.co/document/query)
PS: Why is it called CapybaraDB?
Because it's cute. | 2025-01-05T11:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hu52mq/we_deserve_something_better_than_langchain/ | Available_Ad_5360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu52mq | false | null | t3_1hu52mq | /r/LocalLLaMA/comments/1hu52mq/we_deserve_something_better_than_langchain/ | false | false | self | 140 | null |
Seeking Advice: Windows Laptop vs. Mac Mini for Local LLMs | 1 | [removed] | 2025-01-05T11:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5883/seeking_advice_windows_laptop_vs_mac_mini_for/ | noorAshuvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5883 | false | null | t3_1hu5883 | /r/LocalLLaMA/comments/1hu5883/seeking_advice_windows_laptop_vs_mac_mini_for/ | false | false | self | 1 | null |
Windows Laptop with RTX 4060 or Mac Mini M4 Pro for Running Local LLMs? | 1 | [removed] | 2025-01-05T11:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5aah/windows_laptop_with_rtx_4060_or_mac_mini_m4_pro/ | noorAshuvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5aah | false | null | t3_1hu5aah | /r/LocalLLaMA/comments/1hu5aah/windows_laptop_with_rtx_4060_or_mac_mini_m4_pro/ | false | false | self | 1 | null |
Well it may be janky but this is my AI/nas/plex/media/everything server. | 15 | 2025-01-05T11:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5c7d/well_it_may_be_janky_but_this_is_my/ | Quebber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5c7d | false | null | t3_1hu5c7d | /r/LocalLLaMA/comments/1hu5c7d/well_it_may_be_janky_but_this_is_my/ | false | false | 15 | null |
||
Model stored in local storage instead of loading it into the browser? | 0 | I am trying to use Qwen 2.5 Coder 7B Instruct in a webapp, but with the model stored locally on the laptop's SSD (not in the browser via WebGPU) so that in-browser performance is not affected. Is something like this even possible? If yes, is a computer with 16+ GB of RAM (no discrete graphics) and a decent CPU sufficient for this purpose? | 2025-01-05T11:48:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5g01/model_stored_in_locall_storage_instead_of_loading/ | RandomDude71094 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5g01 | false | null | t3_1hu5g01 | /r/LocalLLaMA/comments/1hu5g01/model_stored_in_locall_storage_instead_of_loading/ | false | false | self | 0 | null |
Run ollama on older gpu | 5 | Hi, I have a GT 710; is there any way I could run Ollama on this card? It has compute capability 3.5 and CUDA version 11.4, with 2 GB of VRAM. | 2025-01-05T11:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5hh3/run_ollama_on_older_gpu/ | SatisfactionIcy1393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5hh3 | false | null | t3_1hu5hh3 | /r/LocalLLaMA/comments/1hu5hh3/run_ollama_on_older_gpu/ | false | false | self | 5 | null |
I have found a seller offering M40 24G for 160 USD | 0 | How good are these? I am planning to get four of them, which would give 96 GiB in total. How good will it be? | 2025-01-05T11:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5k7j/i_have_found_a_buyer_who_is_selling_m40_24g_for/ | maifee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5k7j | false | null | t3_1hu5k7j | /r/LocalLLaMA/comments/1hu5k7j/i_have_found_a_buyer_who_is_selling_m40_24g_for/ | false | false | self | 0 | null |
Context Length Problem with VLLM | 1 | [removed] | 2025-01-05T12:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5v2p/context_length_problem_with_vllm/ | RecognitionNo5205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5v2p | false | null | t3_1hu5v2p | /r/LocalLLaMA/comments/1hu5v2p/context_length_problem_with_vllm/ | false | false | 1 | null |
|
Handwritten Letter Classification Challenge | Industry Assignment 2 IHC - Machine Learning for Real-World Application | 1 | [removed] | 2025-01-05T12:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hu5x4u/handwritten_letter_classification_challenge/ | velmurugan_kannan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu5x4u | false | null | t3_1hu5x4u | /r/LocalLLaMA/comments/1hu5x4u/handwritten_letter_classification_challenge/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?auto=webp&s=75373fc59bdb25efc5f11e5d7dcd4136e25f6966', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=108&crop=smart&auto=webp&s=8848cd391e59be71f0cccbf2097fa260405f2fac', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=216&crop=smart&auto=webp&s=2d5d9132f1a56ac9272cef98ded4014ce74549bb', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=320&crop=smart&auto=webp&s=f59698ee3905d8212980484f0c8e9e1793765ac6', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=640&crop=smart&auto=webp&s=c77d04e990a6bf12d5cdec4f2f8b789a303f2dc0', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=960&crop=smart&auto=webp&s=90ad8db683606d22993796bf7624e91a63ab08ef', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/-49ec6WxNL5aV4e28i5SB_Tpj7AYYR4L8cG2Sk27KeY.jpg?width=1080&crop=smart&auto=webp&s=87e96b0205e21382355906c31dd5f6723b35d6ca', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'OiaK_U23THY4XKyTfRd_q9AkFT-LBhth67T-yafwnS0'}], 'enabled': False} |
Qwen has launched smallthinker 3B , reasoning model | 1 | 2025-01-05T12:24:50 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hu60m8 | false | null | t3_1hu60m8 | /r/LocalLLaMA/comments/1hu60m8/qwen_has_launched_smallthinker_3b_reasoning_model/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/qgwcha7u46be1.png?auto=webp&s=b90b8a96067055b6d43792118df4de663dd543ad', 'width': 1080, 'height': 554}, 'resolutions': [{'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=108&crop=smart&auto=webp&s=1156dba4eadae16c1658ea06a3d0883ca690ada8', 'width': 108, 'height': 55}, {'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=216&crop=smart&auto=webp&s=aa5c66dfd41c9a281d48cc136b9192796f27a671', 'width': 216, 'height': 110}, {'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=320&crop=smart&auto=webp&s=3b1c4d2823b634db1825c3e2ddf6f5959601fee9', 'width': 320, 'height': 164}, {'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=640&crop=smart&auto=webp&s=176db6e3bea459022e50139e3ab087944589d2bc', 'width': 640, 'height': 328}, {'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=960&crop=smart&auto=webp&s=035ef338b334d3db3ab2d1f7227c25a6ff40190a', 'width': 960, 'height': 492}, {'url': 'https://preview.redd.it/qgwcha7u46be1.png?width=1080&crop=smart&auto=webp&s=29e3f6928c7137d8f8fec99118e810ef5ecf16aa', 'width': 1080, 'height': 554}], 'variants': {}, 'id': 'qUrwWR5ZVisvrojh21w3o--kbwgHlP-BNiwLWzUnFTE'}], 'enabled': True} |
|||
Smallthinker 3B parameters reasoning model | 1 | 2025-01-05T12:27:04 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hu61tr | false | null | t3_1hu61tr | /r/LocalLLaMA/comments/1hu61tr/smallthinker_3b_parameters_reasoning_model/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/sklw6vn856be1.png?auto=webp&s=6569d5714281e513d2dccaa181597f7a93688921', 'width': 1080, 'height': 525}, 'resolutions': [{'url': 'https://preview.redd.it/sklw6vn856be1.png?width=108&crop=smart&auto=webp&s=ccf6ee35d6756e51bd9c2a787cd7e4b6372c5583', 'width': 108, 'height': 52}, {'url': 'https://preview.redd.it/sklw6vn856be1.png?width=216&crop=smart&auto=webp&s=25a815a2a4c545976ddcb1fb69fe37fb91798a59', 'width': 216, 'height': 105}, {'url': 'https://preview.redd.it/sklw6vn856be1.png?width=320&crop=smart&auto=webp&s=f65a2ecab1dc061dd50a5022b9c21409ff51ca12', 'width': 320, 'height': 155}, {'url': 'https://preview.redd.it/sklw6vn856be1.png?width=640&crop=smart&auto=webp&s=c6aa0973766edce2c0f6f08a3cfd56d309b9ce0c', 'width': 640, 'height': 311}, {'url': 'https://preview.redd.it/sklw6vn856be1.png?width=960&crop=smart&auto=webp&s=277e3714384ea55192685cba6e9f4c28e27474bd', 'width': 960, 'height': 466}, {'url': 'https://preview.redd.it/sklw6vn856be1.png?width=1080&crop=smart&auto=webp&s=4271bb756c664c9fdcb64796ed7606c2d908aae9', 'width': 1080, 'height': 525}], 'variants': {}, 'id': 'OtDvYmBVLkv8D79AKCSO4S3BSmyyZ4sFmyHA09_vNOE'}], 'enabled': True} |
|||
LLMLingua for code compression | 1 | Has anyone used LLMLingua to compress inputs that contain several code blocks? I would like to use this with Claude 3.5 Sonnet to cut down some costs. However, I feel that it is only meant for text compression and summarization.
If LLMLingua isn't suitable for this type of usecase then what other alternative options do I have? Should I let a smaller LLM summarize my input to a certain degree and then feed it into a SOTA model? | 2025-01-05T12:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hu67cu/llmlingua_for_code_compression/ | Round_Mixture_7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu67cu | false | null | t3_1hu67cu | /r/LocalLLaMA/comments/1hu67cu/llmlingua_for_code_compression/ | false | false | self | 1 | null |
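For reference, a minimal LLMLingua-2 sketch along the lines the post above asks about. The model name, `rate` value, and returned keys follow LLMLingua's published examples but should be checked against the installed version, and whether the compressed prompt stays usable for code is exactly the open question, so treat this as an experiment rather than a recommendation:

```python
# Illustrative only: compress a code-heavy prompt with LLMLingua-2 before
# sending it to a paid model. Inspect the output manually to see whether
# the code blocks survive compression in a usable form.
from llmlingua import PromptCompressor

compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

prompt = "Refactor the following modules...\n```python\n# several code blocks here\n```"
result = compressor.compress_prompt(prompt, rate=0.5)  # keep roughly half the tokens

print(result["compressed_prompt"])
```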
Anyone else experiencing issues with DeepSeek V3 via OpenRouter when input exceeds 12k tokens? | 1 | [removed] | 2025-01-05T12:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hu6dff/anyone_else_experiencing_issues_with_deepseek_v3/ | MisterKot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu6dff | false | null | t3_1hu6dff | /r/LocalLLaMA/comments/1hu6dff/anyone_else_experiencing_issues_with_deepseek_v3/ | false | false | self | 1 | null |
You can now turn github repos into prompts in one click with the gitingest extension! | 1 | 2025-01-05T12:52:24 | https://v.redd.it/5vjg3cue86be1 | MrCyclopede | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hu6g1o | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/5vjg3cue86be1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1594, 'scrubber_media_url': 'https://v.redd.it/5vjg3cue86be1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/5vjg3cue86be1/DASHPlaylist.mpd?a=1738673559%2CYWYzYzU1NjRhM2RjYTZiY2E5ZWQ0NzNmNTk3ZTIyY2ZjOWI5MmFhY2UzZTEzOGMzNzc1NjcyOGM0MDc3MWQxNg%3D%3D&v=1&f=sd', 'duration': 10, 'hls_url': 'https://v.redd.it/5vjg3cue86be1/HLSPlaylist.m3u8?a=1738673559%2CZGUxNzVlY2YzYmM2NTBjOGFhZDEwNzVhNjYzZGIyZTQ2M2QwYTcwNDc5NjBmMTM3OTFkYjU2MTczMGEyMjQyNg%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1hu6g1o | /r/LocalLLaMA/comments/1hu6g1o/you_can_now_turn_github_repos_into_prompts_in_one/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?format=pjpg&auto=webp&s=ec9ae65e91ea1d1eca46863ed2502071a2187fe9', 'width': 1594, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=108&crop=smart&format=pjpg&auto=webp&s=9e962b68c84df04949333a87ed2f8d4488e1b716', 'width': 108, 'height': 73}, {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=216&crop=smart&format=pjpg&auto=webp&s=8b09e3747367f19b4833ded67be9f708ba4cbc9a', 'width': 216, 'height': 146}, {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=320&crop=smart&format=pjpg&auto=webp&s=b7a606eaa43f33a18182ac7ab8f05245354fd4b2', 'width': 320, 'height': 216}, {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=640&crop=smart&format=pjpg&auto=webp&s=c87c7c883f84aaa0b169ab1bd45eaa9a6752aa9d', 'width': 640, 'height': 433}, {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=960&crop=smart&format=pjpg&auto=webp&s=7cd69daa638eaff13046f6ea29f86d5dce41ec5d', 'width': 960, 'height': 650}, {'url': 'https://external-preview.redd.it/OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d0b2f0dbce2fc1adc88d044fbf58815c5cb26856', 'width': 1080, 'height': 731}], 'variants': {}, 'id': 'OGh5OThjdWU4NmJlMTakAVzdZZA0fjL-PQ1klSvAr2COhnOH7iFPXqwmKQ7u'}], 'enabled': False} |
||
Anyone got a good guide on getting structured LLM output (ideally to match a Pydantic class) using the OpenAI python library and Ollama running Llama3.2 | 1 | Title basically, anyone got a good way of doing this? If you have a short code snippet that does it that would be ideal too | 2025-01-05T12:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hu6k9p/anyone_got_a_good_guide_on_getting_structured_llm/ | OverclockingUnicorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu6k9p | false | null | t3_1hu6k9p | /r/LocalLLaMA/comments/1hu6k9p/anyone_got_a_good_guide_on_getting_structured_llm/ | false | false | self | 1 | null |
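A minimal sketch of one way to do this: point the `openai` client at Ollama's OpenAI-compatible endpoint and validate the reply with Pydantic. The model name is an assumption, and how strictly `response_format` is enforced depends on the Ollama version, so the system prompt also spells out the schema:

```python
from openai import OpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

# Ollama exposes an OpenAI-compatible API under /v1; the api_key is ignored.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3.2",
    response_format={"type": "json_object"},  # ask for JSON-only output
    messages=[
        {"role": "system",
         "content": 'Reply only with JSON matching {"name": string, "age": integer}.'},
        {"role": "user", "content": "Extract the person: Alice is 31 years old."},
    ],
)

# Validate the model's JSON against the Pydantic class.
person = Person.model_validate_json(completion.choices[0].message.content)
print(person)
```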
Order of fields in Structured output can hurt LLMs output | 1 | 2025-01-05T13:07:03 | https://www.dsdev.in/order-of-fields-in-structured-output-can-hurt-llms-output | phantom69_ftw | dsdev.in | 1970-01-01T00:00:00 | 0 | {} | 1hu6ovj | false | null | t3_1hu6ovj | /r/LocalLLaMA/comments/1hu6ovj/order_of_fields_in_structured_output_can_hurt/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?auto=webp&s=e0055353cc004e2a66565f0e3bdab3a0e9daee3f', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=108&crop=smart&auto=webp&s=45af2717f1ac80606cdb705d7706c910c63c2cbc', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=216&crop=smart&auto=webp&s=da0d3e670b2511aa1e20d4267bd2aeac6226f54f', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=320&crop=smart&auto=webp&s=5642385991d3bc269c18eb373d1132de847c96d8', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=640&crop=smart&auto=webp&s=1845ee66f761888a6b39eac9fa05891a6c49dc43', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=960&crop=smart&auto=webp&s=825bec715382eb0bce006ec2932cd5c09885a0a4', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/xkrp8DE7x6V4wicAKjYwp0Duw0LyVdedJewnJHYHqSk.jpg?width=1080&crop=smart&auto=webp&s=26e664c5a4a89e6bf75353a628d5b37b1c4bcef5', 'width': 1080, 'height': 567}], 'variants': {}, 'id': '_1ifzbJyd9yC8ishW2jhr8CgnDfR9vqJkohYnWPj08c'}], 'enabled': False} |
||
what's better for Llama 3.2:3B (Ollama) inference - Nvidia RTX 4060 8GB GPU or Mac Mini M4? | 1 | I am running Ollama on my old Intel MacBook with this particular model and it's relatively slow.
Hence I am looking for an upgrade.
what's better for Llama 3.2:3B (Ollama) inference -
Nvidia RTX 4060 8GB (for my existing PC) or Mac Mini M4 (total replacement of existing devices)? | 2025-01-05T13:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hu7bbr/whats_better_for_llama_323b_ollama_inference/ | harsh611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu7bbr | false | null | t3_1hu7bbr | /r/LocalLLaMA/comments/1hu7bbr/whats_better_for_llama_323b_ollama_inference/ | false | false | self | 1 | null |
Llama 3.3 70B Help | 1 | Hi All,
So I have a local AI box that I run at home and I want to upgrade it so I can run the bigger models coming out. I've seen the post linked below with specs for what I need to run Llama 3.3 70B, but I want to ask whether it's worth upgrading my current rig (and what to change), or whether I should scrap it and start anew. Is something like AMD Epyc worth it for AI, or do I want Intel, etc.? I'm just after some help with what to change. Here is my current build:
Ryzen 7 5800x
32GB DDR4 3200Mhz
1TB 990 Pro NVME
Nvidia CMP 100-210 16GB
[https://nodeshift.com/blog/how-to-install-llama-3-3-70b-instruct-locally](https://nodeshift.com/blog/how-to-install-llama-3-3-70b-instruct-locally) | 2025-01-05T13:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hu7bet/llama_33_70b_help/ | Totalkiller4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu7bet | false | null | t3_1hu7bet | /r/LocalLLaMA/comments/1hu7bet/llama_33_70b_help/ | false | false | self | 1 | null |
How long until an AI agent that interacts with email, calendar, to-do list, etc.? | 1 | I've spent a TON of time searching and haven't found anything promising yet, so I wanted to ask for opinions:
How long do you think it'll be until we have our "own" AI agent? Something with access to our email, to-do list, etc, that we can interact with. For example:
Me: "summarize recent emails"
AI: "..John Smith wants to get coffee this week to discuss Project Expo Marker"
Me: "Reply with a calendar invite for 7PM Thursday"
etc.. | 2025-01-05T14:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hu8993/how_long_until_ai_agent_that_interact_with_email/ | NHarvey3DK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu8993 | false | null | t3_1hu8993 | /r/LocalLLaMA/comments/1hu8993/how_long_until_ai_agent_that_interact_with_email/ | false | false | self | 1 | null |
LLaMA 3.1 405B Chatbot HF Space | 1 | For developers who want to test the Llama 3.1 405B LLM, I have posted a Hugging Face space for Llama-3.1-405B through an API so you can test it.
Space link: [https://huggingface.co/spaces/FareedKhan/llama-3.1-405B](https://huggingface.co/spaces/FareedKhan/llama-3.1-405B) | 2025-01-05T14:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hu8ilu/llama_31_405b_chatbot_hf_space/ | FareedKhan557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu8ilu | false | null | t3_1hu8ilu | /r/LocalLLaMA/comments/1hu8ilu/llama_31_405b_chatbot_hf_space/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?auto=webp&s=711dd9c25a0687d33234c6e0854b33a8e8d41493', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=108&crop=smart&auto=webp&s=bdb3043589c8887001f15237385a31b51d0d5d62', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=216&crop=smart&auto=webp&s=7173b499363db85f9a7bb3ec285a93cdb65c6f61', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=320&crop=smart&auto=webp&s=42125ffc8f8b2d0ba012055983e9b946847154a9', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=640&crop=smart&auto=webp&s=e52e4ed723944fcba7d9c0a7a545032d0abeec01', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=960&crop=smart&auto=webp&s=6418d5053ac3034c9df01211416f92a49dcb263b', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/MJEngISh6-A0vtvwu0BhB2snMbijLmfEQr54V5eZcrI.jpg?width=1080&crop=smart&auto=webp&s=e6072d8268a0f276ef9b326d75cbb915aa45ff41', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'L_juM-BhzuRqln0KmJDfONbXINY45xxVQdOlolFkz-Y'}], 'enabled': False} |
PSU ATX Standards, CPU Upgrade, and Storage Questions for Our High-Performance ML Build | 1 | [removed] | 2025-01-05T14:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hu8iqk/psu_atx_standards_cpu_upgrade_and_storage/ | PreatorAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu8iqk | false | null | t3_1hu8iqk | /r/LocalLLaMA/comments/1hu8iqk/psu_atx_standards_cpu_upgrade_and_storage/ | false | false | self | 1 | null |
ChatGPT Alternative: NexusAI | 1 | [removed] | 2025-01-05T14:56:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hu8qku/chatgpt_alternative_nexusai/ | pushkarmcpe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu8qku | false | null | t3_1hu8qku | /r/LocalLLaMA/comments/1hu8qku/chatgpt_alternative_nexusai/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Vebj3OMFNNDPAkgXjcTm-yQqcDt1qkySuEHJ3L10PtI.jpg?auto=webp&s=a167cf2f8170207430fccf0111a56502cfd57d54', 'width': 64, 'height': 64}, 'resolutions': [], 'variants': {}, 'id': '0Rg_jIR4dh-TSRfxOecffHskunhUge7h1pQSO0vu8js'}], 'enabled': False} |
Tools like aider, but for more than just coding? | 1 | I've grown very fond of aider's approach to co-creating with an AI: giving the AI access to the entire codebase so it can make changes directly, with the whole project, and potentially the git history, as context.
I have experimented with applying aider to "code"bases containing sets of Markdown files, to work on them in a similar fashion. However, aider's prompts are (quite reasonably for a coding aid) very focused on thinking as a programmer, and working with things which are code-like and benefit from software engineering approaches, which makes this approach more cumbersome and less effective than it needs to be.
Are any of you aware of tools with a similar fundamental approach (working within the context of file structures and modifying them directly while interacting with a human via chat to discuss approaches and goals) which are more suited for "content-work"? In my specific case: things like, but not limited to, creating strategy documents, analysing and deriving actions/analyses from workshop transcripts, creating learning materials like slides.
I'll consider suggestions for products which operate on online data, e.g. wiki(ish things) like Confluence or Notion, but... call me old-fashioned, but my preference is for tools which work on my local filesystem instead of on someone else's computer -- a sentiment which should be familiar to the denizens of /r/LocalLLaMA :-) .
I've half made up my mind to fork and modify aider for this purpose, but I wonder if there aren't tools out there already which work in this way, and probably better than my half-baked attempt.
Any suggestions or experience you can share? | 2025-01-05T14:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hu8sdt/tools_like_aider_but_for_more_than_just_coding/ | InternetOfStuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu8sdt | false | null | t3_1hu8sdt | /r/LocalLLaMA/comments/1hu8sdt/tools_like_aider_but_for_more_than_just_coding/ | false | false | self | 1 | null |
How DeepSeek V3 token generation performance in llama.cpp depends on prompt length | 1 | 2025-01-05T15:04:41 | fairydreaming | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hu8wr5 | false | null | t3_1hu8wr5 | /r/LocalLLaMA/comments/1hu8wr5/how_deepseek_v3_token_generation_performance_in/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/7390mnrdw6be1.png?auto=webp&s=094d642adf1388da99a000f7c68ff7ead7b6cca0', 'width': 900, 'height': 600}, 'resolutions': [{'url': 'https://preview.redd.it/7390mnrdw6be1.png?width=108&crop=smart&auto=webp&s=fbc94fcd1a46b4ae842d454d58ddd869a7f66a90', 'width': 108, 'height': 72}, {'url': 'https://preview.redd.it/7390mnrdw6be1.png?width=216&crop=smart&auto=webp&s=0ce20c1fcbfaa978daa354fede01e55128c1ac40', 'width': 216, 'height': 144}, {'url': 'https://preview.redd.it/7390mnrdw6be1.png?width=320&crop=smart&auto=webp&s=ae47b50f8a722abe9bdf062de6122281aab93a5e', 'width': 320, 'height': 213}, {'url': 'https://preview.redd.it/7390mnrdw6be1.png?width=640&crop=smart&auto=webp&s=60a043dd4f1135122d1cf5503b4d2caaed3e5a80', 'width': 640, 'height': 426}], 'variants': {}, 'id': 'lHwgLYN1azCOfnHkb1huxTYo3hLC85d6WELsEErjkhc'}], 'enabled': True} |
|||
Meta AI Introduces EWE (Explicit Working Memory): A Novel Approach that Enhances Factuality in Long-Form Text Generation by Integrating a Working Memory | 1 | https://www.marktechpost.com/2025/01/03/meta-ai-introduces-ewe-explicit-working-memory-a-novel-approach-that-enhances-factuality-in-long-form-text-generation-by-integrating-a-working-memory/ | 2025-01-05T15:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hu9lr7/meta_ai_introduces_ewe_explicit_working_memory_a/ | USERNAME123_321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu9lr7 | false | null | t3_1hu9lr7 | /r/LocalLLaMA/comments/1hu9lr7/meta_ai_introduces_ewe_explicit_working_memory_a/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?auto=webp&s=e071e2b72f42429bd48d1f1af8424c798d936ba3', 'width': 1704, 'height': 1088}, 'resolutions': [{'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=108&crop=smart&auto=webp&s=816110cbe7e286c65f1e7cc14e735ec79de87361', 'width': 108, 'height': 68}, {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=216&crop=smart&auto=webp&s=102370d04e726b2893cebcc04b23d2d951ef1136', 'width': 216, 'height': 137}, {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=320&crop=smart&auto=webp&s=b83be3ba8296524d948e79c0ff2d9280bcf4b046', 'width': 320, 'height': 204}, {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=640&crop=smart&auto=webp&s=1075d29cab16047ac933c70bc55dd971f7473a38', 'width': 640, 'height': 408}, {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=960&crop=smart&auto=webp&s=86aac2d57cddbd4daf2d07559ed28d47027f0930', 'width': 960, 'height': 612}, {'url': 'https://external-preview.redd.it/s56yK4fzz-rDovS7gdxb0_hWW27AH9-aiAOC9jUNm7Q.jpg?width=1080&crop=smart&auto=webp&s=074636b2ca8e1c10f949a3dfcb2bfb1f25a1ba53', 'width': 1080, 'height': 689}], 'variants': {}, 'id': 'd4xFnkFBfC4p_TOK7nn_o2o9FUJoWMEobQ4QdkyUlE8'}], 'enabled': False} |
Docker Containers on M Series Macs can't run with GPU? | 1 | I tend to like to run a lot of my services in containers rather than installing on "bare metal." However, my understanding is that at least right now, docker can't access the Metal API on Mac Machines and so I should run things like Whisper and Ollama directly, correct?
I read there are some workarounds to this, but I'm not sure how complex/iffy they are? | 2025-01-05T15:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hu9zg5/docker_containers_on_m_series_macs_cant_run_with/ | ottovonbizmarkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hu9zg5 | false | null | t3_1hu9zg5 | /r/LocalLLaMA/comments/1hu9zg5/docker_containers_on_m_series_macs_cant_run_with/ | false | false | self | 1 | null |
Kokoro-onnx as TTS engine in windows and android | 1 | [removed] | 2025-01-05T16:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1huaev9/kokoroonnx_as_tts_engine_in_windows_and_android/ | ImportantOwl2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huaev9 | false | null | t3_1huaev9 | /r/LocalLLaMA/comments/1huaev9/kokoroonnx_as_tts_engine_in_windows_and_android/ | false | false | self | 1 | null |
how to use Kokoro onnx as realtime TTS engine | 1 | [removed] | 2025-01-05T16:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1huag2d/how_to_use_kokoro_onnx_as_realtime_tts_engine/ | ImportantOwl2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huag2d | false | null | t3_1huag2d | /r/LocalLLaMA/comments/1huag2d/how_to_use_kokoro_onnx_as_realtime_tts_engine/ | false | false | self | 1 | null |
What is the current best hardware for LLM use? | 1 | I don't really have a budget and I just want to be able to run the latest local models such as DeepSeek V3.
I've heard of some chips specially designed for transformer workloads, such as Groq and others, but I was wondering which are currently available and what the current state of the market is.
Note : If there is already a thread about this subject please send me a link!! | 2025-01-05T16:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/1huank4/what_is_the_current_best_hardware_for_llm_use/ | WaldToonnnnn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huank4 | false | null | t3_1huank4 | /r/LocalLLaMA/comments/1huank4/what_is_the_current_best_hardware_for_llm_use/ | false | false | self | 1 | null |
Browser Use running Locally on single 3090 | 1 | 2025-01-05T16:31:44 | https://v.redd.it/w3xldu74a7be1 | pascalschaerli | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1huau1d | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/w3xldu74a7be1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1288, 'scrubber_media_url': 'https://v.redd.it/w3xldu74a7be1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/w3xldu74a7be1/DASHPlaylist.mpd?a=1738686717%2CNmJjOGRkZTIwN2YwZTE2NGM0YTE3NWNkNDg2M2IxYTY0ZmQ4YWQxM2ZkNmMxZjNkMTNhZDE0ZjQwZjk1OTU0MQ%3D%3D&v=1&f=sd', 'duration': 24, 'hls_url': 'https://v.redd.it/w3xldu74a7be1/HLSPlaylist.m3u8?a=1738686717%2CNTI1ZjU4MmZhMDYxYmRiZTY3NWJkOTAxZDYxN2FlZDNiYzkzNjdiOTNkOGE1ZjVmZDkxZDlhYTJjYjExMzM0MA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1huau1d | /r/LocalLLaMA/comments/1huau1d/browser_use_running_locally_on_single_3090/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?format=pjpg&auto=webp&s=b26eaab57a1f22d62a3c44de9b6f997aae5b8a86', 'width': 1288, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=108&crop=smart&format=pjpg&auto=webp&s=774647a791d2802b0002976731f4eca2e1c971aa', 'width': 108, 'height': 90}, {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=216&crop=smart&format=pjpg&auto=webp&s=666192aebb3fa1db1bbf514bad6365cbb79684ad', 'width': 216, 'height': 181}, {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=320&crop=smart&format=pjpg&auto=webp&s=62ebc5b2312cac9dcbcb88c4354bf4c68f31c4c1', 'width': 320, 'height': 268}, {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=640&crop=smart&format=pjpg&auto=webp&s=4d519aea65561707d3e1b2794b10e58eeaa2abff', 'width': 640, 'height': 536}, {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=960&crop=smart&format=pjpg&auto=webp&s=507cfc471b9ce375741b304df0dcb9454fcde2fe', 'width': 960, 'height': 804}, {'url': 'https://external-preview.redd.it/eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6662a0179a67193d3f2a9c48c4459b423df73284', 'width': 1080, 'height': 905}], 'variants': {}, 'id': 'eHdiY3V2NzRhN2JlMWnc2bCTUUxCVz36KpeGm_Mc-Z0lRe9hnAIrRiIH6ir_'}], 'enabled': False} |
||
Need Suggestions for Building a Local PDF-Based Assistant with Llama 3.2-Vision | 1 | [removed] | 2025-01-05T16:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hubabv/need_suggestions_for_building_a_local_pdfbased/ | No_Hovercraft_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hubabv | false | null | t3_1hubabv | /r/LocalLLaMA/comments/1hubabv/need_suggestions_for_building_a_local_pdfbased/ | false | false | self | 1 | null |
Looking for online Docker Host with GPUs | 1 | [removed] | 2025-01-05T17:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hubvp8/looking_for_online_docker_host_with_gpus/ | Intelligent_Lab1491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hubvp8 | false | null | t3_1hubvp8 | /r/LocalLLaMA/comments/1hubvp8/looking_for_online_docker_host_with_gpus/ | false | false | self | 1 | null |
How come the LLM space doesn't have a ComfyUI? | 1 | We all build agents, and they all have the same basic functionality. I know things like LangChain, LlamaIndex and Langflow exist and empower us to build workflows. But they tend to be cursed or overly complicated for nothing.
So yeah, image gen got ComfyUI, with everybody building custom nodes for it, and even if it's a hell of a mess it just works (95% of the time?). How come we don't have such projects in the LLM space?
Do you know any interesting projects for building workflows with such ease?
Where do you see the limits of such a project?
Is it because we are IT nerds and we love our python scripts? And image gen is run by artists and ui devs? 😅 | 2025-01-05T17:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1huc8di/how_come_llm_space_dont_have_a_comfy_ui/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huc8di | false | null | t3_1huc8di | /r/LocalLLaMA/comments/1huc8di/how_come_llm_space_dont_have_a_comfy_ui/ | false | false | self | 1 | null |
Considering a bachelor's in translation, but concerned about the advances in LLMs (Advice only from professionals in the field please) | 1 | Hi everyone,
I’m considering pursuing a bachelor’s degree in translation because I’m passionate about languages and the craft of translation. However, I have serious concerns about the future of the field due to advancements in large language models and AI.
From what I’ve observed and tested myself (I work with English-French), current LLMs are already 80% of the way to producing high-quality translations. Many professionals have also told me that Machine Translation Post-Editing (MTPE) is now easier and more efficient than correcting a bad human translator’s work.
What worries me is that AI could improve even further, or it might even be there already and it's just a question of scaling up to get costs down. With the integration of chain-of-thought reasoning for self-correction and retrieval-augmented generation (RAG) for technical terminology and domain-specific translations, it feels like the gap might completely close in a few years.
I’d really appreciate blunt and honest advice, especially from professionals already working in Machine Translation related fields. I don't want to waste years and a lot of money to end up with no job.
Thank you. | 2025-01-05T17:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hucjvh/considering_a_bachelors_in_translation_but/ | Time_Confection8711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hucjvh | false | null | t3_1hucjvh | /r/LocalLLaMA/comments/1hucjvh/considering_a_bachelors_in_translation_but/ | false | false | self | 1 | null |
Scaling Chat—Featherless.ai or Other Options? | 1 | Hey r/LocalLLaMA!
I’m looking to set up a chat inference system that can handle anywhere from a few users up to about 100 concurrent ones. I want to keep it cost-effective and ensure it runs smoothly without slowdowns or other performance issues.
I’ve been checking out [Featherless.ai](https://featherless.ai/) and their $25/month plan seems promising. However, they limit larger models to one concurrent request and smaller ones to three. Do you think [Featherless.ai](http://Featherless.ai) is a good fit for scaling, or are there better alternatives out there?
Any tips or recommendations on how to scale efficiently and economically would be greatly appreciated!
Thanks a bunch! | 2025-01-05T17:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hucot4/scaling_chatfeatherlessai_or_other_options/ | madhatter349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hucot4 | false | null | t3_1hucot4 | /r/LocalLLaMA/comments/1hucot4/scaling_chatfeatherlessai_or_other_options/ | false | false | self | 1 | null |
That's one of your "frontier" benchmarks (aka "always check your data") | 1 | [Sample question. Green is supposed right answer, red - answer most people with EQ \> 0 would give](https://preview.redd.it/rfn58gqrr7be1.png?width=879&format=png&auto=webp&s=7e33c04a51a8ad0bf77738244b1b4636d5a87099)
[https://simple-bench.com/try-yourself](https://simple-bench.com/try-yourself) | 2025-01-05T18:04:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hucz8q/thats_one_of_your_frontier_benchmarks_aka_always/ | IxinDow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hucz8q | false | null | t3_1hucz8q | /r/LocalLLaMA/comments/1hucz8q/thats_one_of_your_frontier_benchmarks_aka_always/ | false | false | 1 | null |
|
Is there an LLM chat/service/api provider with "Continue" feature? | 1 | This. A feature from LM Studio is something I can't live without when it comes to creative tasks. It gives me complete control over what models do. Such a basic thing, yet for some reason, it is incredibly rare these days.
https://preview.redd.it/emkrjj8is7be1.png?width=277&format=png&auto=webp&s=307a060bd80069a1be5358e4c6d40602b3e86b60
Have you seen it anywhere else? As much as I love LM Studio, my PC isn't powerful enough to support heavier models. It would be fantastic to have the option to pay for an API use instead or something like that. Thanks! | 2025-01-05T18:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hud50i/is_there_an_llm_chatserviceapi_provider_with/ | Johnny_Rell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hud50i | false | null | t3_1hud50i | /r/LocalLLaMA/comments/1hud50i/is_there_an_llm_chatserviceapi_provider_with/ | false | false | 1 | null |
|
Is the 7900 XT basically identical to M2 Ultra chips in terms of token generation speed? | 1 | The 7900 XT has 800 GB/s memory bandwidth with a chiplet design, which means they effectively combined two GPU chips with 400 GB/s of memory throughput each.
The Mac Studio M2 Ultra similarly fuses two M2 Max chips that each have 400 GB/s of memory throughput, achieving 800 GB/s in total.
It seems that they were built with the same kind of architecture. While prompt processing speed may vary depending on GPU core throughput, my experience is that token generation speed is largely limited by memory throughput.
Any thoughts? | 2025-01-05T18:16:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hud9md/is_7900_xt_basically_idential_to_m2_ultra_chips/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hud9md | false | null | t3_1hud9md | /r/LocalLLaMA/comments/1hud9md/is_7900_xt_basically_idential_to_m2_ultra_chips/ | false | false | self | 1 | null |
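A back-of-the-envelope check of the memory-bandwidth argument in the post above; the model size is an illustrative assumption, and real-world numbers land below this ceiling because of overheads:

```python
# Decode speed is roughly bounded by how many times per second the active
# weights can be streamed through memory.
bandwidth_gb_s = 800   # 7900 XT / M2 Ultra figure from the post
model_size_gb = 40     # assumption: ~70B parameters at ~4-bit quantization

ceiling_tok_s = bandwidth_gb_s / model_size_gb
print(f"theoretical ceiling: ~{ceiling_tok_s:.0f} tokens/s")  # ~20 tokens/s
```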
UwU 7B Instruct | 1 | 2025-01-05T18:24:13 | https://huggingface.co/qingy2024/UwU-7B-Instruct | random-tomato | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hudfsf | false | null | t3_1hudfsf | /r/LocalLLaMA/comments/1hudfsf/uwu_7b_instruct/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?auto=webp&s=560fbd47c69ec67cc4eb205c16f6ff3e525c1dcc', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=108&crop=smart&auto=webp&s=a591e1038bcf38390765857a053fce046d7ab31f', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=216&crop=smart&auto=webp&s=dfe5ac0c640ece3c0f9a63ee25f12fecbebfaef8', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=320&crop=smart&auto=webp&s=f032d75e99679dfbc1cb6ed94e0d93be47b43749', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=640&crop=smart&auto=webp&s=4bf4f23547e187b0df481ebec2c221bed5e2a65b', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=960&crop=smart&auto=webp&s=5e6df3ce221885151179eb732fb44ba3a602c895', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/e76qTaHLeg5WKSdCwGeG9nlGQJ8OUug__xJlMt-lCMs.jpg?width=1080&crop=smart&auto=webp&s=1da3f4c52a09b5d5649ebc3a9f4650480765183a', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'rBx9lS0AQNLm_y9mtDgUxxtMbONYDK5eKEedi5hFf08'}], 'enabled': False} |
||
Right now I'm paying for github copilot but I only use the code completion features, not the actual chat. are there any good models that do this locally? | 1 | hi!
Question in title.. I'd like to know if there are any good models that do this work locally for all languages (mostly c++, html, css, typescript) and also integrate well with vs code?
thanks! | 2025-01-05T18:33:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hudnly/right_now_im_paying_for_github_copilot_but_i_only/ | teh_mICON | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hudnly | false | null | t3_1hudnly | /r/LocalLLaMA/comments/1hudnly/right_now_im_paying_for_github_copilot_but_i_only/ | false | false | self | 1 | null |
Help with ollama and the Continue VSCode extension? Sometimes it works, sometimes it fails spectacularly | 1 | [removed] | 2025-01-05T19:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/1huevrq/help_with_ollama_and_the_continue_vscode/ | im_dylan_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huevrq | false | null | t3_1huevrq | /r/LocalLLaMA/comments/1huevrq/help_with_ollama_and_the_continue_vscode/ | false | false | self | 1 | null |
Share memory between GPU and RAM? | 1 | I am currently working on a small system; it only holds an RX 5500 XT 8GB and an R5 5600 with 16GB of DDR4 3200MHz.
I have already played with it and found that some 7B-8B models at Q6_K_L fit in the GPU with a 4k context window, but I would like to know if any of you have experimented with loading the model itself fully onto the GPU while keeping the context window (KV cache) in system RAM. If that's possible, I believe a small system like this could run 7B-8B models with a 64k context window.
Do you have any considerations or resources I could consult for that? | 2025-01-05T19:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/1huez8k/share_memory_between_gpu_and_ram/ | JuCaDemon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huez8k | false | null | t3_1huez8k | /r/LocalLLaMA/comments/1huez8k/share_memory_between_gpu_and_ram/ | false | false | self | 1 | null |
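A minimal sketch of the split the post above asks about, using llama-cpp-python: all weight layers on the GPU, with the KV cache (the context) left in system RAM. The parameter names follow llama-cpp-python's `Llama()` constructor but should be checked against the installed version; an RX 5500 XT also needs a ROCm or Vulkan build of llama.cpp, and prompt processing slows down noticeably when the cache lives in RAM:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model-7b-q6_k_l.gguf",  # hypothetical file name
    n_gpu_layers=-1,       # offload every weight layer to the 8 GB GPU
    offload_kqv=False,     # keep the KV cache (context window) in system RAM
    n_ctx=32768,           # context size now limited by RAM rather than VRAM
)

out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```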
Local AI-written text detection | 1 | Are there any tools or models I can run locally to somewhat reliably detect LLM-written text?
I’m specifically interested in recognizing the earlier models like ChatGPT 3 | 2025-01-05T20:01:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hufqu7/local_ai_written_text_detection/ | boredjo4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hufqu7 | false | null | t3_1hufqu7 | /r/LocalLLaMA/comments/1hufqu7/local_ai_written_text_detection/ | false | false | self | 1 | null |
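One fully local heuristic worth trying for older models: score the text's perplexity under a small reference model such as GPT-2, since machine-generated text tends to be unusually predictable. This is a rough signal rather than a reliable detector, and any threshold has to be calibrated on your own human-written and AI-written samples:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

# Lower perplexity = more predictable text = weak evidence of machine generation.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```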
I made a (difficult) humour analysis benchmark about understanding the jokes in cult British pop quiz show Never Mind the Buzzcocks | 1 | 2025-01-05T20:02:56 | _sqrkl | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hufsgu | false | null | t3_1hufsgu | /r/LocalLLaMA/comments/1hufsgu/i_made_a_difficult_humour_analysis_benchmark/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?auto=webp&s=c4a7348f93d400b5a39ab1a15345159338a82434', 'width': 1276, 'height': 1153}, 'resolutions': [{'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=108&crop=smart&auto=webp&s=84cf92f6c48c611fed1bd4767fef2b699149e639', 'width': 108, 'height': 97}, {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=216&crop=smart&auto=webp&s=398c32d87af88bc5adc7b0d159a9524f6ae1c8e8', 'width': 216, 'height': 195}, {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=320&crop=smart&auto=webp&s=544903551ca07273a6c0f28d13092358695749d8', 'width': 320, 'height': 289}, {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=640&crop=smart&auto=webp&s=4f5745e800dbc3ea636211092b545835534bf8dd', 'width': 640, 'height': 578}, {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=960&crop=smart&auto=webp&s=1f259ee5ea1c432817a0c3aa10a8c7796220d971', 'width': 960, 'height': 867}, {'url': 'https://preview.redd.it/rcqgoy5kd8be1.png?width=1080&crop=smart&auto=webp&s=fcc215c9e4ce5d56718c11d61e5907408c131a29', 'width': 1080, 'height': 975}], 'variants': {}, 'id': 'l6TEl0bFIQrgRrfAUiwKuzp_M6IPF3WXzv00P_nvNPs'}], 'enabled': True} |
|||
Dolphin 3.0 Released (Llama 3.1 + 3.2 + Qwen 2.5) | 1 | 2025-01-05T20:03:29 | https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3 | TechnoByte_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hufsy4 | false | null | t3_1hufsy4 | /r/LocalLLaMA/comments/1hufsy4/dolphin_30_released_llama_31_32_qwen_25/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?auto=webp&s=857e5c4ba6a4a5669e3cf76a5e6d278b2df5adde', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=108&crop=smart&auto=webp&s=f0d5e2e6de4bff1b7b87819d9467904880a408d3', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=216&crop=smart&auto=webp&s=740dd0ceef67852a058afad4b48d0e4f9d904ff4', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=320&crop=smart&auto=webp&s=9b4a839b6c21c7c30ce6bf779fe348eb475a318a', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=640&crop=smart&auto=webp&s=35762ca011564a31bd0387d1d535874e2bbb33c2', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=960&crop=smart&auto=webp&s=b664a3a0b11500279a6930616207d1f9a2f91794', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=1080&crop=smart&auto=webp&s=bbe0920c765146a1ece5c1b8b29299fd078d59e9', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'UQM08j_aV_PUC29YoQT-sX6TRyPytU2JBaGzaNQYXro'}], 'enabled': False} |
||
Which js library do you use to work with your local LLM server? (Trying to decide between openai and ollama js libraries, or just using raw HTTP requests - are there more options out there?) | 1 | So, I'm currently locally hosting ollama, and have a few credits in OpenRouter. I'd like to make an app that is inter-operable between the two (if not more) providers.
My options so far are the ollama js library, the openai library (which OpenRouter seems to recommend), and using HTTP requests directly. I imagine if I used the requests directly, I'd end up creating my own library anyway.
Is there any choice I'm missing? | 2025-01-05T20:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/1huft1v/which_js_library_do_you_use_to_work_with_your/ | OneFanFare | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huft1v | false | null | t3_1huft1v | /r/LocalLLaMA/comments/1huft1v/which_js_library_do_you_use_to_work_with_your/ | false | false | self | 1 | null |
Help with ollama and the Continue VSCode extension? Sometimes it works, sometimes it fails spectacularly | 1 | [removed] | 2025-01-05T20:11:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hufzn0/help_with_ollama_and_the_continue_vscode/ | im_dylan_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hufzn0 | false | null | t3_1hufzn0 | /r/LocalLLaMA/comments/1hufzn0/help_with_ollama_and_the_continue_vscode/ | false | false | self | 1 | null |
Introducing Ainara, which is my attempt to create an open source AI assistant featuring real-time capabilites | 1 | [removed] | 2025-01-05T20:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hug2vi/introducing_ainara_which_is_my_attempt_to_create/ | _khromalabs_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hug2vi | false | null | t3_1hug2vi | /r/LocalLLaMA/comments/1hug2vi/introducing_ainara_which_is_my_attempt_to_create/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/aAaN9J-AGs_iG1O89KtbnLTG5f1evevaCZhAuDpGuBU.jpg?auto=webp&s=505635df8266fb7608796281ea57d5a9aa50c6fe', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/aAaN9J-AGs_iG1O89KtbnLTG5f1evevaCZhAuDpGuBU.jpg?width=108&crop=smart&auto=webp&s=e1cfc3323fbc697d9095d1892d8eb73cb6e0a3bf', 'width': 108, 'height': 81}, {'url': 'https://external-preview.redd.it/aAaN9J-AGs_iG1O89KtbnLTG5f1evevaCZhAuDpGuBU.jpg?width=216&crop=smart&auto=webp&s=9bba2908f4cb28244b4f99f40690e2555fd473b0', 'width': 216, 'height': 162}, {'url': 'https://external-preview.redd.it/aAaN9J-AGs_iG1O89KtbnLTG5f1evevaCZhAuDpGuBU.jpg?width=320&crop=smart&auto=webp&s=7b448fdae795405823e2b9985482f50237af7c59', 'width': 320, 'height': 240}], 'variants': {}, 'id': '5ALXEnzx0cEhH5YMCqhXymeE9O71EMtJIC0kHJQDjcg'}], 'enabled': False} |
Local LLM Recommendations | 1 | Hi All,
I want to expand my skills into the world of local LLMs as a Data Scientist. I've got a few use cases that I would like to test, but I know that I need to get started first. I have been gifted a PC with a 1TB M.2 SSD, 32GB of RAM, and an Nvidia GTX graphics card; I will have to confirm the model (I also have a GTX 1060, but I don't think I can physically fit both in the case even though there are sufficient slots).
For a PoC I want a basic chat Q&A style. However, I would like to test out some of these use cases:
1. Multi-data-point company identity verification
    1. I'm regularly asked to look at external lists of companies and try to match them with our internal lists. Sometimes these lists have multiple identifying data points (e.g. company name, address, etc.) and sometimes it's just the company name to match on. I would like to leverage an LLM (even if it runs overnight) to try to match as closely as possible against these lists.
2. Environment Crawling Context Relevant Questions
    1. I would like the LLM to be able to reference files within my network and answer questions based on them. This could include things like policy docs inside the org, or traversing files for code (e.g. a PHP file that has an "include" statement: identify the included file and read it for context).
Also, as an enthusiast developer, I may want to write some Python to interact with the LLM rather than only using a chatbot-style interface.
Any thoughts or recommendations would be appreciated! | 2025-01-05T20:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hug4qr/local_llm_recommendations/ | roblu001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hug4qr | false | null | t3_1hug4qr | /r/LocalLLaMA/comments/1hug4qr/local_llm_recommendations/ | false | false | self | 1 | null |
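For use case 1 above, one approach worth trying before (or alongside) an overnight LLM run is plain embedding similarity; the model choice, example names, and similarity scores below are illustrative assumptions:

```python
from sentence_transformers import SentenceTransformer, util

internal = ["Acme Corporation", "Globex LLC", "Initech Inc."]
external = ["ACME Corp (UK)", "Initech Incorporated"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small enough for CPU or an older GTX
emb_int = model.encode(internal, convert_to_tensor=True, normalize_embeddings=True)
emb_ext = model.encode(external, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(emb_ext, emb_int)  # shape: (len(external), len(internal))
for i, name in enumerate(external):
    j = int(scores[i].argmax())
    print(f"{name!r} -> {internal[j]!r} (similarity {float(scores[i][j]):.2f})")
```

Low-similarity pairs can then be handed to an LLM for a yes/no judgment instead of running every pair through the model.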
Qlora or fft? | 1 | [removed] | 2025-01-05T20:55:53 | https://www.reddit.com/r/LocalLLaMA/comments/1huh1wh/qlora_or_fft/ | Born-Adhesiveness893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huh1wh | false | null | t3_1huh1wh | /r/LocalLLaMA/comments/1huh1wh/qlora_or_fft/ | false | false | self | 1 | null |
A beginner question about multi gpu setup | 1 | [removed] | 2025-01-05T21:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/1huh5mi/a_beginner_question_about_multi_gpu_setup/ | tapancnallan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huh5mi | false | null | t3_1huh5mi | /r/LocalLLaMA/comments/1huh5mi/a_beginner_question_about_multi_gpu_setup/ | false | false | self | 1 | null |
Model to run on 2-4 GBs of RAM? | 1 | Yeah, shooting the moon xD
I have a very limited use case - 1-2 emails per day, the text of which needs to be analyzed for a potential event to add to calendar (.ics). Easy for ChatGPT, but I thought that it might just be possible to do it locally on my shared Proxmox server.
I can devote 2 GB, perhaps up to 4 GB of RAM to this, and obviously processing speed is not much of an issue (if it takes half an hour, so be it) - but is there any model that will run on such RAM limitation? | 2025-01-05T21:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1huhczs/model_to_run_on_24_gbs_of_ram/ | SlowStopper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huhczs | false | null | t3_1huhczs | /r/LocalLLaMA/comments/1huhczs/model_to_run_on_24_gbs_of_ram/ | false | false | self | 1 | null |
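A minimal sketch of the pipeline, assuming a ~1B model served by Ollama (for example llama3.2:1b) squeezes into the 2-4 GB budget; the model name, JSON keys, and response access are illustrative and may need adjusting to the installed library version:

```python
import json
import ollama

email_text = "Hi, let's sync on the Q3 budget on 2025-01-10 at 14:00 for one hour."

resp = ollama.chat(
    model="llama3.2:1b",
    format="json",  # JSON mode: the model is constrained to emit valid JSON
    messages=[{
        "role": "user",
        "content": "Extract a calendar event from this email as JSON with keys "
                   "summary, start, end (both YYYYMMDDTHHMMSS), or {} if none:\n"
                   + email_text,
    }],
)

event = json.loads(resp["message"]["content"])
if event:  # emit a bare VEVENT block for the .ics file
    print("BEGIN:VEVENT\n"
          f"SUMMARY:{event['summary']}\n"
          f"DTSTART:{event['start']}\n"
          f"DTEND:{event['end']}\n"
          "END:VEVENT")
```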
Streaming TTS INPUT? | 1 | Hi!
I need very low latency, and it would really help if I didn't have to wait for the whole LLM generation to finish before starting to generate audio.
Splitting by sentence works, but the generated audio is not consistent at all and the quality is very bad.
Do you know any local TTS that allows to not provide all the text at the beginning of the generation and allows to stream the input (and output)? | 2025-01-05T21:34:14 | https://www.reddit.com/r/LocalLLaMA/comments/1huhzd6/streaming_tts_input/ | Zeink303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huhzd6 | false | null | t3_1huhzd6 | /r/LocalLLaMA/comments/1huhzd6/streaming_tts_input/ | false | false | self | 1 | null |
LLMs which fit into 24GB | 1 | Hi all. Finally, I put together my rig for LLM/AI experiments. It's an i7-12700KF (12C/20T), 128GB RAM and an RTX 3090 with 24GB.
While it's able to run, let's say, Llama 3.3 70B on CPU, models which fit into VRAM run an order of magnitude faster.
So far I get really nice results with Gemma 2 27B for generic discussions, and Qwen2.5-Coder 32B for programming.
What do you guys use, when you're limited to 24Gb? | 2025-01-05T21:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hui6qq/llms_which_fit_into_24gb/ | ipomaranskiy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hui6qq | false | null | t3_1hui6qq | /r/LocalLLaMA/comments/1hui6qq/llms_which_fit_into_24gb/ | false | false | self | 1 | null |
Training a Basic Personal Agent Locally | 1 | [removed] | 2025-01-05T22:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1huj2y1/training_a_basic_personal_agent_locally/ | Yoavbarak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huj2y1 | false | null | t3_1huj2y1 | /r/LocalLLaMA/comments/1huj2y1/training_a_basic_personal_agent_locally/ | false | false | self | 1 | null |
ML Server Parts Suggestions | 1 | [removed] | 2025-01-05T22:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hujoea/ml_server_parts_suggestions/ | TheUnrealAdagio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hujoea | false | null | t3_1hujoea | /r/LocalLLaMA/comments/1hujoea/ml_server_parts_suggestions/ | false | false | self | 1 | null |
How to quickly launch alltalk for SillyTavern? | 1 | How to quickly launch alltalk for SillyTavern?
How do I create a shortcut to quickly launch alltalk? | 2025-01-05T22:59:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hujzy8/how_to_quickly_launch_alltalk_for_sillytavern/ | Successful-Button-53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hujzy8 | false | null | t3_1hujzy8 | /r/LocalLLaMA/comments/1hujzy8/how_to_quickly_launch_alltalk_for_sillytavern/ | false | false | self | 1 | null |
Best Software for Running Local LLMs on Windows with AMD 6800XT and 16GB VRAM | 1 | [removed] | 2025-01-05T23:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/1huk2o1/best_software_for_running_local_llms_on_windows/ | ITMSPGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1huk2o1 | false | null | t3_1huk2o1 | /r/LocalLLaMA/comments/1huk2o1/best_software_for_running_local_llms_on_windows/ | false | false | self | 1 | null |
I've been out of the local LLM space for a while; what do people use to run these models now? | 1 | Last time I checked, people used the Oobabooga text-generation web UI, but I don't think that's the case anymore. | 2025-01-05T23:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hukb1g/ive_been_out_of_the_local_llm_space_for_a_while/ | pigeon57434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hukb1g | false | null | t3_1hukb1g | /r/LocalLLaMA/comments/1hukb1g/ive_been_out_of_the_local_llm_space_for_a_while/ | false | false | self | 1 | null
Measuring non-determinism in LLMs--boring but perhaps useful work | 1 | [removed] | 2025-01-05T23:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hul4g0/measuring_nondeterminism_in_llmsboring_but/ | Skiata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hul4g0 | false | null | t3_1hul4g0 | /r/LocalLLaMA/comments/1hul4g0/measuring_nondeterminism_in_llmsboring_but/ | false | false | self | 1 | null |
How are these companies building video/image generation tools? From scratch, fine-tuning Llama, or something else?
| 1 | [removed] | 2025-01-06T00:03:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hulfvz/how_are_these_companies_building_videoimage/ | conlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hulfvz | false | null | t3_1hulfvz | /r/LocalLLaMA/comments/1hulfvz/how_are_these_companies_building_videoimage/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?auto=webp&s=665147fa8c8643bc489b96ab3c20e5d2f61d6a29', 'width': 1200, 'height': 627}, 'resolutions': [{'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=108&crop=smart&auto=webp&s=6701dc349ef197fb927ed08528f7cf7659bb5fe8', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=216&crop=smart&auto=webp&s=2037b03e81be5b3fd74b0629a04dac91cb37807e', 'width': 216, 'height': 112}, {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=320&crop=smart&auto=webp&s=cb957ce6585f88841b6c7c2d00ff80d60f12b398', 'width': 320, 'height': 167}, {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=640&crop=smart&auto=webp&s=1bb6466ff42b383314058dfba6af9b24dc819828', 'width': 640, 'height': 334}, {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=960&crop=smart&auto=webp&s=9dd81cf1e59b40d7d577692ca5f6ae1d2f9fd183', 'width': 960, 'height': 501}, {'url': 'https://external-preview.redd.it/mHeTlzjF1liAq1kx_5OlzzBrhLnNfBDggSFNkIa7guk.jpg?width=1080&crop=smart&auto=webp&s=0a501fe583c2431d09f7e03da7be80de77e72943', 'width': 1080, 'height': 564}], 'variants': {}, 'id': 't1jdJojsqC5z91aXKUR8HOXvOTrdYDUB9Iagjh2BWCk'}], 'enabled': False} |
Feature vs. Reasoning guided generation | 1 | I saw [this video titled “The Dark Matter of AI” by Welch Labs](https://youtube.com/watch?v=UGO_Ehywuxc) a couple weeks ago, and the notion of tracing Residual Streams absolutely blew my mind. A few days ago I asked about how I can do this, and I was introduced to [GemmaScope](https://www.neuronpedia.org/gemma-scope#main) by Google DeepMind and TransformerLens….
I HIGHLY recommend you play with these tools, and read the documentation. In short, from what I understand, we can _visually trace_ the evolution of token probabilities layer-by-layer. But even more impressive than that, we can _decompose_ latent representations in the neural network into identifiable features.
So here's my question: models like QwQ superficially follow the rules of logic by saturating attention with “reasoning-aligned” tokens and context, while models like o1 and o3 scale test-time compute, effectively brute-forcing logic. But why do this rather than train a Sparse Autoencoder once, give control of it to a SOTA LLM like 4o-mini, and allow it to use the SAE to *internally steer generation*?
If you are reading this and have never heard of these terms/jargon, forgive me, but I dont want to try and explain anything I dont truly understand—I hope the educated folk in the comments can educate/correct us both on our understanding and assumptions. Im only making this thread to start the conversation! | 2025-01-06T00:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hulx6y/feature_vs_reasoning_guided_generation/ | Imjustmisunderstood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hulx6y | false | null | t3_1hulx6y | /r/LocalLLaMA/comments/1hulx6y/feature_vs_reasoning_guided_generation/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/vN8hj9EraQqDWHZdBGZssl3m1WmZrSjleNxHkWFlYAM.jpg?auto=webp&s=ea1882df1414ce801d63c5774ecd1f591f693009', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/vN8hj9EraQqDWHZdBGZssl3m1WmZrSjleNxHkWFlYAM.jpg?width=108&crop=smart&auto=webp&s=079dd1a7f7811343b1a1634302a65c059385b326', 'width': 108, 'height': 81}, {'url': 'https://external-preview.redd.it/vN8hj9EraQqDWHZdBGZssl3m1WmZrSjleNxHkWFlYAM.jpg?width=216&crop=smart&auto=webp&s=43cf53382ee048b22cc77465268a690ad61878c0', 'width': 216, 'height': 162}, {'url': 'https://external-preview.redd.it/vN8hj9EraQqDWHZdBGZssl3m1WmZrSjleNxHkWFlYAM.jpg?width=320&crop=smart&auto=webp&s=19a1de4a453647c80e503c8c9e42a3482a55e0d6', 'width': 320, 'height': 240}], 'variants': {}, 'id': 'j2kzpbeG9HxA91MyEyXj5Y-PuyuAkhDInEZ0sBCmEXw'}], 'enabled': False} |
Kotaemon-papers: an open-source / self-hostable web app to chat with your papers | 1 | Hi r/LocalLLaMA,
Recently our team at [https://github.com/Cinnamon/kotaemon/](https://github.com/Cinnamon/kotaemon/) has been working on a public demo to showcase the new advanced citation features in our open-source RAG application:
[https://cin-model-kotaemon.hf.space/](https://cin-model-kotaemon.hf.space/)
https://preview.redd.it/xysydhb8r9be1.png?width=3582&format=png&auto=webp&s=8d0929a7f7898208a7a6de47c67854cd53aaa822
We’re excited to share a free web app that lets users explore top daily machine learning (ML) papers on Arxiv (via the HuggingFace API) and upload their own Arxiv papers to get LLM-assisted summaries, mind maps, and answers to questions based on the content.
Some notable features:
- **Instant Summaries & Mind Maps**: Generate concise summaries and visual mind maps for any Arxiv paper.
- **Transparent Citations**: Verify LLM-generated answers with clear, evidence-backed citations. Citations can be highlighted directly in the in-browser PDF viewer.
- **Flexible Citation Options**: Choose between highlights and inline citations. Plus, *select any sentence in the AI-generated response to see its supporting source* from the original paper.
- **Multi-Paper Analysis**: Compare, contrast, and compose summaries from multiple papers simultaneously.
- **Complex Question Solving**: Use Chain-of-Thought (CoT) reasoning mode to break down and solve complex questions step-by-step.
- (And most important of all) **Customizable & Self-hosted**: Easily self-host your private app via the free HuggingFace Space hosting feature. You can securely configure your own LLM and upload your private document collections.
We’d love to hear your thoughts, feedback, and recommendations as we continue improving this tool.
Check out the demo and happy hacking! | 2025-01-06T00:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hum6nr/kotaemonpapers_an_opensource_selfhostable_web_app/ | taprosoft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hum6nr | false | null | t3_1hum6nr | /r/LocalLLaMA/comments/1hum6nr/kotaemonpapers_an_opensource_selfhostable_web_app/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?auto=webp&s=d914e3e71cb63f985c78411210a3faf75276a60b', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=108&crop=smart&auto=webp&s=612501496d5ec556d76d6ff95a1221e208fc76ef', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=216&crop=smart&auto=webp&s=7917d1c23aae45d40d5697387532d02ce78271b4', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=320&crop=smart&auto=webp&s=e0fd4b3dada43321d50ba91aeb71de4c311b44c5', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=640&crop=smart&auto=webp&s=8295c1a7cd8b4dc519cab5f4948153931b9f7cda', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=960&crop=smart&auto=webp&s=2d432cb061c6a1f73467a89f39cfe725d2e38d82', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/0gT2uKIx3lKA0MtlMrulDhoNZAVcFVhkrWF4XxlrWzQ.jpg?width=1080&crop=smart&auto=webp&s=9e8595306d2d74da54c2c74bef3121b29a0d9b55', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'ud_CQwZBTuqIQ0LhHJLEO5U4zobUrMi5WUghacE5MSI'}], 'enabled': False} |
|
Preparing for interview for video understanding | 1 | Hello everyone, I am preparing for an interview for a research job position. It's focused on foundation models and video understanding. Any suggestions for papers "I should not miss"?
Thanks ! | 2025-01-06T00:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1humczn/preparing_for_interview_for_video_understanding/ | l31bn1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1humczn | false | null | t3_1humczn | /r/LocalLLaMA/comments/1humczn/preparing_for_interview_for_video_understanding/ | false | false | self | 1 | null |
Anyone know how to get a 5090 | 1 | Hi everyone,
I am doing a “budget” AI workstation build that I am very excited about. I was for a while considering doing a dual 4090 build but with the 5090s projected 32 GB of vram it just makes more sense to do a single or possibly dual 5090 build. So on that vein HOW IN THE HELL SHOULD I GET ONE. It was tough enough getting my hands on a 4090 like a year after they released. This time I want to be prepared. What can I do? I want to pay the normal price not some insane scalper markup. Is there some pre order service? Do I just sit on Best Buy on release day and hope. Is it better to get straight from the Nvidia MSI or Asus websites? Let me know if you guys have any suggestions or experience in this. | 2025-01-06T00:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1humgfa/anyone_know_how_to_get_a_5090/ | Rbarton124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1humgfa | false | null | t3_1humgfa | /r/LocalLLaMA/comments/1humgfa/anyone_know_how_to_get_a_5090/ | false | false | self | 1 | null |
Running DeepSeek V3 Coder | 1 | Can I run DeepSeek V3 Coder on a dual Xeon v4 with 512GB of RAM? What's the best way to run it, and are there any instructions out there for installing it, since it doesn't seem to be on Ollama yet? | 2025-01-06T00:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1humln1/running_deepseek_v3_coder/ | PositiveEnergyMatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1humln1 | false | null | t3_1humln1 | /r/LocalLLaMA/comments/1humln1/running_deepseek_v3_coder/ | false | false | self | 1 | null