Dataset schema (per-column type and observed range):

| column | dtype | observed range |
| --- | --- | --- |
| title | string | length 1 to 300 |
| score | int64 | 0 to 3.09k |
| selftext | string | length 0 to 40k |
| created | timestamp[ns] | |
| url | string | length 0 to 780 |
| author | string | length 3 to 20 |
| domain | string | length 0 to 82 |
| edited | timestamp[ns] | |
| gilded | int64 | 0 to 2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646 to 1.8k |
| name | string | length 10 |
| permalink | string | length 33 to 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4 to 213 |
| ups | int64 | 0 to 3.09k |
| preview | string | length 301 to 5.01k |
I want to do aspect based sentiment analysis
1
[removed]
2025-01-06T01:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1hungr5/i_want_to_do_aspect_based_sentiment_analysis/
Rahul_Albus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hungr5
false
null
t3_1hungr5
/r/LocalLLaMA/comments/1hungr5/i_want_to_do_aspect_based_sentiment_analysis/
false
false
self
1
null
Annoyed by misunderstanding? No longer.
1
[removed]
2025-01-06T01:55:12
https://www.reddit.com/r/LocalLLaMA/comments/1hunskd/annoyed_by_misunderstanding_no_longer/
Linkpharm2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hunskd
false
null
t3_1hunskd
/r/LocalLLaMA/comments/1hunskd/annoyed_by_misunderstanding_no_longer/
false
false
self
1
null
Batch prompting with speculative decoding?
1
[removed]
2025-01-06T02:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1huo22a/batch_prompting_with_speculative_decoding/
hyperna21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huo22a
false
null
t3_1huo22a
/r/LocalLLaMA/comments/1huo22a/batch_prompting_with_speculative_decoding/
false
false
self
1
null
VITA: Towards Open-Source Interactive Omni Multimodal LLM
1
2025-01-06T02:20:53
https://github.com/VITA-MLLM/VITA
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
1huobn4
false
null
t3_1huobn4
/r/LocalLLaMA/comments/1huobn4/vita_towards_opensource_interactive_omni/
false
false
https://b.thumbs.redditm…9lO9_Xo8f-qM.jpg
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Noob Questions (I apologize for any ignorance)
1
[removed]
2025-01-06T02:26:43
https://www.reddit.com/r/LocalLLaMA/comments/1huofug/noob_questions_i_apologize_for_any_ignorance/
xclaim494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huofug
false
null
t3_1huofug
/r/LocalLLaMA/comments/1huofug/noob_questions_i_apologize_for_any_ignorance/
false
false
self
1
{preview: 1200x630 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Annoyed by misunderstanding?
1
[removed]
2025-01-06T02:29:05
https://www.reddit.com/r/LocalLLaMA/comments/1huohkj/annoyed_by_misunderstanding/
Linkpharm2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huohkj
false
null
t3_1huohkj
/r/LocalLLaMA/comments/1huohkj/annoyed_by_misunderstanding/
false
false
self
1
null
Noob Question
1
[removed]
2025-01-06T02:30:02
https://www.reddit.com/r/LocalLLaMA/comments/1huoib1/noob_question/
xclaim494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huoib1
false
null
t3_1huoib1
/r/LocalLLaMA/comments/1huoib1/noob_question/
false
false
self
1
{preview: 1200x630 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Annoyed by misunderstanding?
1
[removed]
2025-01-06T02:30:36
https://www.reddit.com/r/LocalLLaMA/comments/1huoip4/annoyed_by_misunderstanding/
Linkpharm2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huoip4
false
null
t3_1huoip4
/r/LocalLLaMA/comments/1huoip4/annoyed_by_misunderstanding/
false
false
self
1
null
Honest use cases for LLMs.
1
[removed]
2025-01-06T02:57:35
https://www.reddit.com/r/LocalLLaMA/comments/1hup22p/honest_use_cases_for_llms/
Low-Inspection-6024
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hup22p
false
null
t3_1hup22p
/r/LocalLLaMA/comments/1hup22p/honest_use_cases_for_llms/
false
false
self
1
null
Honest use cases for LLMs.
1
[removed]
2025-01-06T02:59:52
https://www.reddit.com/r/LocalLLaMA/comments/1hup3r6/honest_use_cases_for_llms/
Low-Inspection-6024
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hup3r6
false
null
t3_1hup3r6
/r/LocalLLaMA/comments/1hup3r6/honest_use_cases_for_llms/
false
false
self
1
null
Honest use cases for LLMs.
1
[removed]
2025-01-06T03:02:43
https://www.reddit.com/r/LocalLLaMA/comments/1hup5yd/honest_use_cases_for_llms/
Low-Inspection-6024
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hup5yd
false
null
t3_1hup5yd
/r/LocalLLaMA/comments/1hup5yd/honest_use_cases_for_llms/
false
false
self
1
null
What's the difference between "dolphin" and "abliterated" models?
1
I just saw the release of Dolphin 3, and I'm not very familiar with uncensored models, so I have a few questions: From what I understand, Dolphin models are trained on "harmful" datasets to reduce refusals, while abliteration performs "surgery" on the model to remove refusals. Is this correct? And which method is better? Specifically, which method better preserves the original model's intelligence?
2025-01-06T03:03:33
https://www.reddit.com/r/LocalLLaMA/comments/1hup6iu/whats_the_difference_between_dolphin_and/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hup6iu
false
null
t3_1hup6iu
/r/LocalLLaMA/comments/1hup6iu/whats_the_difference_between_dolphin_and/
false
false
self
1
null
DeepSeek V3 is the shit.
1
Man, I am really enjoying this new model! I've worked in the field for 5 years and realized that you simply cannot build consistent workflows on any of the state-of-the-art (SOTA) model providers. They are constantly changing stuff behind the scenes, which messes with how the models behave and interact. It's like trying to build a house on quicksand, frustrating as hell. (Yes, I use the APIs as well and have similar issues.)

I've always seen the potential in open-source models and have been using them solidly, but I never really found them to have that same *edge* when it comes to intelligence. They were good, but not quite there. Then December rolled around, and it was an amazing month with the release of the new Gemini variants. Personally, I was having a rough time before that with Claude, ChatGPT, and even the earlier Gemini variants; they all went to absolute shit for a while. It was like the AI apocalypse or something. But now? We're finally back to getting really long, thorough responses without the models trying to force hashtags, comments, or redactions into everything. That was so fucking annoying, literally. There are people in our organizations who straight-up stopped using any AI assistant because of how dogshit it became.

Now we're back, baby! DeepSeek-V3 is really awesome. 600 billion parameters seem to be a sweet spot of some kind. I won't pretend to know what's going on under the hood with this particular model, but it has been my daily driver, and I'm loving it. I love how you can really dig deep into diagnosing issues, and it's easy to prompt it to switch between super long outputs and short, concise answers just by using language like "only do this." It's versatile and reliable without being patronizing (fuck you, Claude).

Shit is on fire right now. I am so stoked for 2025. The future of AI is looking bright. Thanks for reading my ramblings. Happy Fucking New Year to all you crazy cats out there. Try not to burn down your mom's basement with your overclocked rigs. Cheers!
2025-01-06T03:56:12
https://www.reddit.com/r/LocalLLaMA/comments/1huq6z0/deepseek_v3_is_the_shit/
Odd-Environment-7193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huq6z0
false
null
t3_1huq6z0
/r/LocalLLaMA/comments/1huq6z0/deepseek_v3_is_the_shit/
false
false
self
1
null
Improve prompt for Qwen2VL
1
[removed]
2025-01-06T04:02:01
https://www.reddit.com/r/LocalLLaMA/comments/1huqb1d/improve_prompt_for_qwen2vl/
Expert_Onion1666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huqb1d
false
null
t3_1huqb1d
/r/LocalLLaMA/comments/1huqb1d/improve_prompt_for_qwen2vl/
false
false
self
1
null
Improve Qwen2VL format
1
[removed]
2025-01-06T04:04:56
https://www.reddit.com/r/LocalLLaMA/comments/1huqcz0/improve_qwen2vl_format/
Expert_Onion1666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huqcz0
false
null
t3_1huqcz0
/r/LocalLLaMA/comments/1huqcz0/improve_qwen2vl_format/
false
false
self
1
null
Best model for ERP?
1
[removed]
2025-01-06T04:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1huqmh9/best_model_for_erp/
Bryguy318
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huqmh9
false
null
t3_1huqmh9
/r/LocalLLaMA/comments/1huqmh9/best_model_for_erp/
false
false
nsfw
1
null
PDF Knowledge Base
1
[removed]
2025-01-06T04:25:28
https://www.reddit.com/r/LocalLLaMA/comments/1huqqc6/pdf_knowledge_base/
Extension_Leave9652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huqqc6
false
null
t3_1huqqc6
/r/LocalLLaMA/comments/1huqqc6/pdf_knowledge_base/
false
false
self
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Noob Question
1
[removed]
2025-01-06T04:47:52
https://www.reddit.com/r/LocalLLaMA/comments/1hur4if/noob_question/
xclaim494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hur4if
false
null
t3_1hur4if
/r/LocalLLaMA/comments/1hur4if/noob_question/
false
false
self
1
{preview: 1200x630 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Kokoro-82M, an Apache 2.0 TTS model
1
[removed]
2025-01-06T04:56:45
https://www.reddit.com/r/LocalLLaMA/comments/1hura6f/kokoro82m_an_apache_20_tts_model/
rzvzn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hura6f
false
null
t3_1hura6f
/r/LocalLLaMA/comments/1hura6f/kokoro82m_an_apache_20_tts_model/
false
false
self
1
null
Recommend me some blogs/newsletters about LLM applications!
1
Would love to get some daily/weekly newsletters on how people are solving challenges in the LLM space (topics like prompt management, guardrails, etc.). If you subscribe to any interesting ones, I'd love to check them out!
2025-01-06T05:12:12
https://www.reddit.com/r/LocalLLaMA/comments/1hurkd1/recommend_me_some_blogsnewsletters_about_llm/
vyngotl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hurkd1
false
null
t3_1hurkd1
/r/LocalLLaMA/comments/1hurkd1/recommend_me_some_blogsnewsletters_about_llm/
false
false
self
1
null
Deepseek v3 is goated (for me)
1
[removed]
2025-01-06T05:12:35
https://www.reddit.com/r/LocalLLaMA/comments/1hurklf/deepseek_v3_is_goated_for_me/
Successful-League176
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hurklf
false
null
t3_1hurklf
/r/LocalLLaMA/comments/1hurklf/deepseek_v3_is_goated_for_me/
false
false
self
1
null
Any ai tool that can help
1
[removed]
2025-01-06T05:59:49
https://www.reddit.com/r/LocalLLaMA/comments/1husd33/any_ai_tool_that_can_help/
unrenderedfile
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1husd33
false
null
t3_1husd33
/r/LocalLLaMA/comments/1husd33/any_ai_tool_that_can_help/
false
false
self
1
null
DataBridge - Fully local, Open Source RAG solution
1
Hi r/LocalLLaMA! I've been working on a completely open-source and local Multi-modal RAG solution called [DataBridge](https://github.com/databridge-org/databridge-core). We're fully customizable, and incredibly modular - if you need any additional features, you can build on top with a single file. We believe the future of RAG and software development is gonna be through specification. As a result, you only need to edit the `config.toml` to get your desired RAG deployment. You can learn more about DataBridge through our [docs](https://databridge.gitbook.io/databridge-docs), or ask questions here! Would love some feedback, directions you think are promising to build towards, and any thoughts you may have about this project!
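A minimal sketch of the "everything via config.toml" idea the post describes; the keys below are hypothetical stand-ins, not DataBridge's actual schema (see its docs for the real options):

```python
# Minimal sketch of a spec-driven config.toml. The keys are hypothetical
# placeholders, not DataBridge's real schema -- check the DataBridge docs.
import tomllib  # stdlib since Python 3.11

EXAMPLE = """
[completion]
model = "llama3.1:8b"      # assumed: whatever local model you serve

[vector_store]
provider = "postgres"      # assumed: pick your backing store
"""

cfg = tomllib.loads(EXAMPLE)
print(cfg["completion"]["model"], cfg["vector_store"]["provider"])
```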
2025-01-06T06:06:11
https://www.reddit.com/r/LocalLLaMA/comments/1hush0z/databridge_fully_local_open_source_rag_solution/
Advanced_Army4706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hush0z
false
null
t3_1hush0z
/r/LocalLLaMA/comments/1hush0z/databridge_fully_local_open_source_rag_solution/
false
false
self
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Favorite AI Frameworks for Linux? Text summaries mostly...
1
Hello LLA geniuses! Looking for some pointers on the basics...

* Is there really an advantage to using this or that distro for running open-source, local AI?
* Is an app like LM Studio a good thing to invest time in learning?
* What is your favorite tool to make a local LLM as integrated/seamless as possible?
* Are *paid* vs. *free* models significantly different for the average home user, i.e. doing things like generating spreadsheets, analyzing stocks, and summarizing articles?
* Do you need a beefy GPU? I've come across various opinions on this...

For context, I'm running:

* AMD Ryzen 5700G (powering three monitors)
* Nvidia GeForce 1030 (running an additional 2 monitors)
* Win/Fedora dual boot (might change to Pop!_OS for DaVinci Resolve functionality)

I'm looking to integrate local AI as much as possible into my daily workflow, and don't want to feed my data to non-local providers. Grateful for sage advice.
2025-01-06T06:26:39
https://www.reddit.com/r/LocalLLaMA/comments/1huss8e/favorite_ai_frameworks_for_linux_text_summaries/
JohannesComstantine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huss8e
false
null
t3_1huss8e
/r/LocalLLaMA/comments/1huss8e/favorite_ai_frameworks_for_linux_text_summaries/
false
false
self
1
null
VITA-1.5:Towards GPT-4o Level Real-Time Vision and Speech Interaction
1
[removed]
2025-01-06T06:30:09
https://www.reddit.com/r/LocalLLaMA/comments/1husu12/vita15towards_gpt4o_level_realtime_vision_and/
Lynncc6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1husu12
false
null
t3_1husu12
/r/LocalLLaMA/comments/1husu12/vita15towards_gpt4o_level_realtime_vision_and/
false
false
https://b.thumbs.redditm…Gv33y6NHdHnw.jpg
1
{preview: 1200x648 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
Background of SearchGPT
1
[removed]
2025-01-06T06:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1hut51e/background_of_searchgpt/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hut51e
false
null
t3_1hut51e
/r/LocalLLaMA/comments/1hut51e/background_of_searchgpt/
false
false
self
1
null
Latest Creative Writing LLM
1
[removed]
2025-01-06T07:22:36
https://www.reddit.com/r/LocalLLaMA/comments/1hutl0w/latest_creative_writing_llm/
Toasty_Toms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hutl0w
false
null
t3_1hutl0w
/r/LocalLLaMA/comments/1hutl0w/latest_creative_writing_llm/
false
false
self
1
null
DeepSeek v3 on 128 gb mbp
1
Hi all, I've been away from the local AI scene for some time (custom chatbots and RAG left a sour taste in my mouth). That said, I have a MBP Max 128 GB and have been hearing great things about the new DeepSeek model. Last year DeepSeek v2 was my go-to model on LM Studio, but last I checked I didn't see any v3 yet that wasn't the 600B model. For other MBP 128 GB users: is it possible to run DeepSeek v3 yet? If so, what's your toolchain + t/s looking like? I remember there being models optimized for Metal / Apple; did these ever catch on, and can we expect one for DS v3? What are the best apps / libs for LLMs on Apple silicon?
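Not an answer for V3 itself (even a 4-bit quant of a ~600B model is far past 128 GB), but for the "best libs on Apple silicon" question, mlx-lm is a common choice. A minimal sketch, where the mlx-community repo name is an assumption:

```python
# Minimal mlx-lm sketch for Apple-silicon inference. The repo name below is
# an assumption -- swap in any MLX-community quant that fits your machine.
from mlx_lm import load, generate  # pip install mlx-lm

model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Hello from Apple silicon", max_tokens=64))
```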
2025-01-06T07:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1hutmrd/deepseek_v3_on_128_gb_mbp/
rahabash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hutmrd
false
null
t3_1hutmrd
/r/LocalLLaMA/comments/1hutmrd/deepseek_v3_on_128_gb_mbp/
false
false
self
1
null
What’s a good model for 3090?
1
[removed]
2025-01-06T07:59:54
https://www.reddit.com/r/LocalLLaMA/comments/1huu37i/whats_a_good_model_for_3090/
Fresh_Heron_3707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huu37i
false
null
t3_1huu37i
/r/LocalLLaMA/comments/1huu37i/whats_a_good_model_for_3090/
false
false
self
1
null
Is there a guide to running Deepseek v3 locally?
1
This is a very basic question. I have no experience with DeepSeek whatsoever, and my technical experience is a little above average, but not at an engineer's level. I run several models using Ollama, but I don't know how to run DeepSeek v3. Is there any guide on how to?
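For context on feasibility, a hedged sketch of the usual Ollama flow; note that DeepSeek V3 is a 671B-parameter MoE (roughly 400 GB of memory even at 4-bit), so it will not fit on typical home hardware, and the model tag below is an assumption rather than a confirmed entry in the Ollama library:

```python
# Sketch only: "deepseek-v3" is a hypothetical tag -- check the Ollama model
# library before pulling; a 671B MoE needs ~400 GB of memory even at 4-bit.
import ollama  # pip install ollama; requires the Ollama daemon running

response = ollama.chat(
    model="deepseek-v3",  # assumption; substitute whatever tag is published
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["message"]["content"])
```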
2025-01-06T08:11:43
https://www.reddit.com/r/LocalLLaMA/comments/1huu94l/is_there_a_guide_to_running_deepseek_v3_locally/
x0rchid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huu94l
false
null
t3_1huu94l
/r/LocalLLaMA/comments/1huu94l/is_there_a_guide_to_running_deepseek_v3_locally/
false
false
self
1
null
Getting great results on Gemma2:27b that I can’t scale and replicate on DeepSeek/ChatGPT/Gemini/etc
1
Not sure if this is the right place to ask or where I can get help, but I'm an absolute beginner to using LLMs. I work in the medical field and in education, and I'm trying to make an LLM patient that can roleplay someone with a secret disease, so that a student can ask it questions to figure out what disease they have. The idea is that this can help students figure out how to intelligently ask questions to narrow down differential diagnoses and to become more comfortable with this skill before talking to real patients.

I was able to make a great prototype with gemma2:27b, where I just gave it a prompt and it was able to wonderfully roleplay, offer feedback to the student, never spoil the diagnosis until the end, and stay totally medically accurate the entire time.

Now I want to scale this up into a website to share with students instead of just a prototype on my computer. I assume that this means I should be using one of the major LLM APIs like OpenAI, Gemini, Claude, DeepSeek, etc. But I haven't been able to get good results with any of these: they always mix up whether they're a patient or a facilitator offering feedback, they always spoil the diagnosis early, they sometimes contradict themselves (i.e. say they have a headache and then say they don't have one right after), and they sometimes make medical errors. I know that this isn't a very easy thing to solve, but I'm really frustrated and not sure what my options are.
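A minimal sketch of one common fix for the role mixing described above: keep the patient and the facilitator as two separate conversations, each with its own system prompt, over any OpenAI-compatible API. The model name here is just an assumption:

```python
# Minimal sketch: one dedicated system prompt per role, so the patient
# persona never doubles as the feedback-giving facilitator.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PATIENT_SYSTEM = (
    "You are a standardized patient with a hidden diagnosis. "
    "Answer only as the patient; never give feedback, never reveal the "
    "diagnosis, and stay consistent with symptoms you already reported."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model could slot in here
    messages=[
        {"role": "system", "content": PATIENT_SYSTEM},
        {"role": "user", "content": "Do you have any headaches?"},
    ],
)
print(reply.choices[0].message.content)
```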
2025-01-06T08:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1huualj/getting_great_results_on_gemma227b_that_i_cant/
Amazydayzee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huualj
false
null
t3_1huualj
/r/LocalLLaMA/comments/1huualj/getting_great_results_on_gemma227b_that_i_cant/
false
false
self
1
null
For those who care about how o1 works technically, OpenAI has stated that o1 was built using reinforcement fine-tuning, which was announced by OpenAI on December 6 as day 2 of Shipmas
1
From [this OpenAI job posting](https://openai.com/careers/research-engineer-scientist-multimodal-product-oriented-research/):

> Reinforcement finetuning: our team makes the full RL pipeline that trained o1 available to our customers to build their own expert reasoning models in their domain.

OpenAI employee John Allard stated something similar in [this tweet](https://x.com/john__allard/status/1865120101810475503). John Allard also appears in [OpenAI's day 2 of Shipmas video about reinforcement fine-tuning](https://www.youtube.com/watch?v=yCIYS9fx56U), in which several OpenAI employees said similar things. Other OpenAI communications about reinforcement fine-tuning are [here](https://openai.com/form/rft-research-program/) and [here](https://help.openai.com/en/articles/10250364-how-to-access-reinforcement-fine-tuning). [Here](https://www.datacamp.com/blog/reinforcement-fine-tuning) and [here](https://openpipe.ai/blog/openai-rft) are two explanations from third parties about reinforcement fine-tuning.

Machine learning expert Nathan Lambert uses the non-paywalled part of [this SemiAnalysis article](https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-training-infrastructure-orion-and-claude-3-5-opus-failures/) to give informed speculation about how o1 works in the blog post and video [Quick recap on the state of reasoning](https://www.interconnects.ai/p/the-state-of-reasoning). Some of the material in that blog post is detailed further in his older blog post [OpenAI's Reinforcement Finetuning and RL for the masses](https://www.interconnects.ai/p/openais-reinforcement-finetuning). You might also be interested in his blog posts [OpenAI's o1 using "search" was a PSYOP](https://www.interconnects.ai/p/openais-o1-using-search-was-a-psyop) and [o3: The grand finale of AI in 2024](https://www.interconnects.ai/p/openais-o3-the-2024-finale-of-ai).
2025-01-06T08:16:41
https://www.reddit.com/r/LocalLLaMA/comments/1huubev/for_those_who_care_about_how_o1_works_technically/
Wiskkey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huubev
false
null
t3_1huubev
/r/LocalLLaMA/comments/1huubev/for_those_who_care_about_how_o1_works_technically/
false
false
self
1
null
Best model for internationalization?
1
What is the current best model for internationalization? Is DeepSeek v3 the best one for English-Chinese? Are there any benchmarks testing models for internationalization?
2025-01-06T08:23:10
https://www.reddit.com/r/LocalLLaMA/comments/1huuei5/best_model_for_internationalization/
IHave2CatsAnAdBlock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huuei5
false
null
t3_1huuei5
/r/LocalLLaMA/comments/1huuei5/best_model_for_internationalization/
false
false
self
1
null
Multi-GPU system for Local LLM?
1
After a few days of Googling, I have some unanswered questions about the general way LLM inference functions that I've been unable to find answers to without the text becoming unreadable or too abstract. I think it'd be a good idea to gather the technical questions and answers into one thread in a dense format.

I'm considering getting a multi-GPU system mainly to do single-LLM inference. I might want to do some fine-tuning as well, and some Stable Diffusion. I'd love to get these questions answered before I pull a potentially expensive trigger.

LLMs scale best with memory bandwidth, as far as I know (a rough estimate is sketched after this post). As long as there's enough compute, adding more doesn't help; it all seems to be bottlenecked by the memory speed. From my observations, it looks like 48 GB is the holy grail for reasonably priced local LLM inference; it can comfortably fit a 30B at Q8 with a massive context, or a 70B at Q4 with a fair context length. Quantizing a model seems to be the best way to squeeze a lot of additional performance out of it and to shrink it to fit into anything, at the cost of losing quality in the answers, and GPUs seem to work perfectly fine with quantized models. From my experience, Q4 has an acceptable amount of quality loss for shrinking the model to almost a fourth of its FP16 size. Going smaller than Q4 seems to exponentially increase perplexity loss.

The following questions only apply to running a single instance of an LLM. I'm assuming two of the same GPUs will run two of the same LLMs at the same speed as you would run a single LLM on one GPU, barring KV computation, which can simply be done serially.

GPU/VRAM questions:

1.0: How well do multi-GPU systems scale generally? Is 2x16 GB of HBM2 (1 TB/s) better than 1x24 GB of GDDR5 (350 GB/s), disregarding the additional 8 GB?
1.1: 2x16 GB HBM2 vs. 1x24 GB GDDR6X (940 GB/s)?
1.2: 3x16 GB HBM2 vs. 2x24 GB GDDR6X?
1.3: Any predictions for 32 GB GDDR7 (1.79 TB/s)? (Namely the RTX 5090.)
1.4: What about not disregarding the additional 8 GB of question 1.0: is there a difference in quality between a 32B Q4_K_L vs. Q6_K_L, for example?
1.5: Should I avoid quants below FP16? Q8? Q6?
1.6: How important is compute really, compared to VRAM? If I can get double the VRAM for half the FP16 compute at the same VRAM bandwidth, am I losing anything?
1.7: How is ARC for LLM inference? I haven't found any great benchmarks.

PCIe questions:

2.0: Does link speed matter?
2.1: Is it fine stuffing all GPUs into 3.0 x4 slots with riser cables?
2.2: What about mixing slot bandwidths for the same model of GPUs?
2.3: PCIe bifurcation? (1 3.0 x16 -> 4 3.0 x4)
2.4: Is there any communication between GPUs during inference?
2.5: Does link generation matter at all? 3.0 vs. 4.0 specifically.
2.6: Does Resizable BAR affect anything?

Rest-of-the-system questions:

3.0: Does the CPU/platform matter at all when doing GPU inference? (Beyond the potential PCIe difference.)
3.1: Are there any issues with ROCm?
3.2: ... and if I'm willing to tinker with configs and potentially reprogram small sections?
3.3: ... on Linux?
3.4: ... on Windows?
3.5: If issues persist, simply using Vulkan?
3.6: How does CUDA work for older Nvidia GPUs? (Tesla M10, Tesla P40)
3.7: How well does the SYCL backend work? (For Intel ARC specifically.)
3.8: Would it be more valuable to build a workstation/server computer with octa-channel DDR4 (perhaps quad/octa-channel DDR5 once affordable?) and stick with CPU inference? (For example, an EPYC 7262?) (~1000€ buying used; by my calculations, 8-channel DDR4 would be 200 GB/s at 3200 MT/s.)

Misc. questions:

4.0: What does fine-tuning need in terms of GPU resources?
4.1: Should I save my money and use OpenAI / Google / your favorite API provider, or just pay for a subscription to their user interfaces?
4.2: Should I simply wait until the holy grail of 1.58-bit is achieved, and/or 12B/30B models become leagues above what they currently are?
4.3: Is there anything interesting about running 100B+ models yourself at low quants (IQ2_XS/M)? Is the slowdown of CPU inference worth the potential quality of answers (Q4_K_M? Q6_K?)? (My system has 128 GB of DDR4, dual-channel 3200 MT/s.)
4.4: How do big MoE models compare to 100B+ dense models, say Mixtral 8x22B vs. Llama 3 120B, in terms of quality of answers?
4.5: ... How about at lower quants?
4.6: ... Do MoEs scale worse with multiple GPUs? Better?
4.7: There are rumors of a 24/32 GB Intel ARC Battlemage. Would this be worth getting, if it appears?

Final questions, more directed toward me:

5.0: Were you to recommend a setup at an absolute maximum of 1500€ for GPUs only, for the best inference, what would you recommend? I'm currently considering options between Tesla M10s, Tesla P40s, Instinct MI50s, RTX 3090s, and 7900 XTXs. Hitting the 48 GB would be the main goal, but cost efficiency is a big key for me as well. I don't mind losing 20% performance to save 50% of the money.
5.1: Would you recommend I keep saving until I can afford something bigger and better? If so, any suggestions?
5.2: Anything you want to share regarding this topic? Do you run a single instance of an LLM with multiple GPUs? Which ones? What models, and T/s? What about the KV processing speed?
5.3: Is there something obvious I forgot to ask that would end up biting my ass here?

Thank you for your time!
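A back-of-the-envelope sketch for the bandwidth math referenced above (question 1.0): single-stream decoding is roughly memory-bound, so a first-order estimate divides effective memory bandwidth by the bytes touched per token, which is about the quantized weight size. A rough Python sketch under those stated assumptions:

```python
# Back-of-the-envelope estimate for question 1.0: single-stream decoding is
# roughly memory-bound, so tokens/s ~= effective bandwidth / bytes read per
# token (about the quantized weight size). Ignores compute, KV cache, and
# inter-GPU overhead, and assumes the weights actually fit in the pool.
def est_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

q4_70b_gb = 40  # ~40 GB of weights for a 70B model at Q4 (rough)
print(est_tokens_per_s(350, q4_70b_gb))   # single GDDR5-class card: ~8.8 t/s
print(est_tokens_per_s(1000, q4_70b_gb))  # HBM2 pair, ideal scaling: ~25 t/s
print(est_tokens_per_s(200, q4_70b_gb))   # 8-channel DDR4-3200 CPU:  ~5 t/s
```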
2025-01-06T08:27:33
https://www.reddit.com/r/LocalLLaMA/comments/1huughr/multigpu_system_for_local_llm/
XMan3332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huughr
false
null
t3_1huughr
/r/LocalLLaMA/comments/1huughr/multigpu_system_for_local_llm/
false
false
self
1
null
After reviewing research papers and documentation, and conducting thorough testing, I conclude: DeepSeek V3 excels in reasoning and math, with strong performance in coding and writing. However, concerns remain regarding data privacy and government oversight. What are your thoughts?
1
2025-01-06T08:40:08
https://www.reddit.com/gallery/1huumbj
Dhruvil_XD
reddit.com
1970-01-01T00:00:00
0
{}
1huumbj
false
null
t3_1huumbj
/r/LocalLLaMA/comments/1huumbj/after_reviewing_research_papers_and_documentation/
false
false
https://a.thumbs.redditm…CnbdcSxkuHc4.jpg
1
null
DeepSeek V3 is impressive, but so is its political censorship
1
[removed]
2025-01-06T09:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1huuz9x/deepseek_v3_is_impressive_but_so_is_its_political/
Automatic-Finish-985
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huuz9x
false
null
t3_1huuz9x
/r/LocalLLaMA/comments/1huuz9x/deepseek_v3_is_impressive_but_so_is_its_political/
true
false
nsfw
1
null
Why does Qwen 2.5 support 128k context length, but the output supports only up to 8k?
1
> Context length support up to 128K tokens and can generate up to 8K tokens.

https://qwen.readthedocs.io/en/latest/
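The 128K figure is the context window (how much the model can attend to); the 8K figure is the recommended generation length, which in practice is a decoding setting. A minimal transformers sketch, assuming a Qwen 2.5 instruct checkpoint; raising `max_new_tokens` past what the model was trained to emit is possible but tends to degrade output:

```python
# Sketch: the 8K output figure is a generation-length knob, separate from
# the 128K context window. With transformers it is set via max_new_tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-7B-Instruct"  # any Qwen 2.5 instruct checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

inputs = tok("Summarize this very long document ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8192)  # the "8K output" setting
print(tok.decode(out[0], skip_special_tokens=True))
```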
2025-01-06T09:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1huvdq4/why_does_qwen_25_support_128k_context_length_but/
secsilm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huvdq4
false
null
t3_1huvdq4
/r/LocalLLaMA/comments/1huvdq4/why_does_qwen_25_support_128k_context_length_but/
false
false
self
1
null
Need help brainstorming multiple image analysis.
1
I know that I can currently use vision models for single image analysis and embeddings for image similarity. But what if I want to compare, say, 10 images? Let me give you an example of what my use case would look like: Let's say I have all the images of a product from an e-commerce website. Let's take a medicine as the product; it has 5 images. Now I have a set of 10 allowed values which are different product views, for example: Front View, Back View, Packaging View, Lifestyle View, etc. Now I'm brainstorming how I can identify which of the allowed product view types aren't present in the 5 images I have. Every image could potentially be a combination of multiple views. For example, one image could be a combination of both Front and Packaging Views, and so on. Also, if you guys are working with vision models, what's the best OSS vision model today?
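A minimal sketch of the set-difference idea: tag each image with every view it shows, union the tags, and report which allowed views are missing. `classify_views` is a hypothetical stub, to be wired to whatever vision model or API gets picked:

```python
# Sketch: classify each image into (possibly several) view labels, then take
# the set difference against the allowed views to find what's missing.
ALLOWED_VIEWS = {"Front View", "Back View", "Packaging View", "Lifestyle View"}

def classify_views(image_path: str) -> set[str]:
    """Hypothetical stub: prompt a vision model to return every matching view."""
    raise NotImplementedError("call your VLM here, one image per request")

def missing_views(image_paths: list[str]) -> set[str]:
    seen: set[str] = set()
    for path in image_paths:
        seen |= classify_views(path)  # one image may carry several views
    return ALLOWED_VIEWS - seen
```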
2025-01-06T09:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1huvf5z/need_help_brainstorming_multiple_image_analysis/
CaptTechno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huvf5z
false
null
t3_1huvf5z
/r/LocalLLaMA/comments/1huvf5z/need_help_brainstorming_multiple_image_analysis/
false
false
self
1
null
Structures can become shackles in AI models
1
https://i.redd.it/f60ikpylkcbe1.gif

Read an interesting blog today: [https://open.substack.com/pub/vizuara/p/structures-can-become-shackles-in?r=4ssvv2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false](https://open.substack.com/pub/vizuara/p/structures-can-become-shackles-in?r=4ssvv2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)

> **Lesser the human structure, more scalable the AI methods can become.**
2025-01-06T10:04:09
https://www.reddit.com/r/LocalLLaMA/comments/1huvqgp/structures_can_become_shackles_in_ai_models/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huvqgp
false
null
t3_1huvqgp
/r/LocalLLaMA/comments/1huvqgp/structures_can_become_shackles_in_ai_models/
false
false
https://b.thumbs.redditm…c34zsAMvSkDg.jpg
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
DeepSeek v3 running at 17 tps on 2x M2 Ultra with MLX.distributed!
1
Hey everyone! 😁 Resident MLX fan here - just bringing some good news over from Twitter. Apologies for no screenshot; mobile Reddit isn't letting me include both a pic and text lol Here's the link: https://x.com/awnihannun/status/1875976286474289345
2025-01-06T10:06:13
https://www.reddit.com/r/LocalLLaMA/comments/1huvrer/deepseek_v3_running_at_17_tps_on_2x_m2_ultra_with/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huvrer
false
null
t3_1huvrer
/r/LocalLLaMA/comments/1huvrer/deepseek_v3_running_at_17_tps_on_2x_m2_ultra_with/
false
false
self
1
{preview: 1020x720 source image on external-preview.redd.it, 5 scaled variants (108-960 px), enabled: False}
How to calculate attention in Ollama to fool the model that text was generated by the model itself
1
[removed]
2025-01-06T11:02:34
https://www.reddit.com/r/LocalLLaMA/comments/1huwk4t/how_to_calculate_attention_in_ollama_to_fool/
Rough_Metal_9999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huwk4t
false
null
t3_1huwk4t
/r/LocalLLaMA/comments/1huwk4t/how_to_calculate_attention_in_ollama_to_fool/
false
false
self
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
The EU should finance an open source European healthcare LLM
1
I think one of the most promising functions of LLMs is the creation of medical prognoses from symptoms. There are vast, freely available medical databases, so the EU should finance an open-source medical (healthcare) AI. What do you think?
2025-01-06T11:29:50
https://www.reddit.com/r/LocalLLaMA/comments/1huwyke/the_eu_should_finance_an_open_source_european/
custodiam99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huwyke
false
null
t3_1huwyke
/r/LocalLLaMA/comments/1huwyke/the_eu_should_finance_an_open_source_european/
false
false
self
1
null
F5 TTS
1
Has anyone tried to run a quantized version of F5 on Windows or Linux? How are the inference speed and VRAM usage? I know about the f5-tts-mlx repo, but that is Mac-only.
2025-01-06T12:11:17
https://www.reddit.com/r/LocalLLaMA/comments/1huxlro/f5_tts/
MemePromotionLLC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huxlro
false
null
t3_1huxlro
/r/LocalLLaMA/comments/1huxlro/f5_tts/
false
false
self
1
null
Model for nuanced language understanding?
1
What models can you recommend for tasks like verb extraction, replacement of common nouns by proper nouns, changing passive to active voice, etc.? The models should be capable of dealing with at least English, French, and German. I'm not particularly impressed with Teuken so far, but I'd like to hear about your experiences.
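A minimal prompt-based sketch for one of these tasks (verb extraction), assuming a local model served via Ollama; the model tag is an assumption, and the same pattern extends to noun replacement or passive-to-active rewriting:

```python
# Sketch: plain prompting for verb extraction. The "qwen2.5:14b" tag is an
# assumption -- any multilingual instruct model served by Ollama slots in.
import ollama  # pip install ollama; requires the Ollama daemon running

def extract_verbs(sentence: str, model: str = "qwen2.5:14b") -> str:
    prompt = (
        "List every verb in the following sentence, one per line, "
        f"in its surface form. Sentence: {sentence}"
    )
    r = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return r["message"]["content"]

print(extract_verbs("Der Vertrag wurde gestern unterschrieben und verschickt."))
```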
2025-01-06T12:12:46
https://www.reddit.com/r/LocalLLaMA/comments/1huxmm7/model_for_nuanced_language_understanding/
Patentsmatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huxmm7
false
null
t3_1huxmm7
/r/LocalLLaMA/comments/1huxmm7/model_for_nuanced_language_understanding/
false
false
self
1
null
Running local LLMs in background, overnight, periodically, or continuously
1
[removed]
2025-01-06T12:29:18
https://www.reddit.com/r/LocalLLaMA/comments/1huxwnp/running_local_llms_in_background_overnight/
NewTestAccount2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huxwnp
false
null
t3_1huxwnp
/r/LocalLLaMA/comments/1huxwnp/running_local_llms_in_background_overnight/
false
false
self
1
null
Running local LLMs in background, overnight, periodically, or continuously
1
[removed]
2025-01-06T12:31:06
https://www.reddit.com/r/LocalLLaMA/comments/1huxxs5/running_local_llms_in_background_overnight/
NewTestAccount2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huxxs5
false
null
t3_1huxxs5
/r/LocalLLaMA/comments/1huxxs5/running_local_llms_in_background_overnight/
false
false
self
1
null
Running local LLMs in background, overnight, periodically, or continuously
1
[removed]
2025-01-06T12:34:05
https://www.reddit.com/r/LocalLLaMA/comments/1huxzge/running_local_llms_in_background_overnight/
NewTestAccount2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huxzge
false
null
t3_1huxzge
/r/LocalLLaMA/comments/1huxzge/running_local_llms_in_background_overnight/
false
false
self
1
null
Model Highlight: Qwentile 2.5-32B-Instruct (Short Review)
1
Qwentile 2.5-32B-Instruct: [https://huggingface.co/maldv/Qwentile2.5-32B-Instruct](https://huggingface.co/maldv/Qwentile2.5-32B-Instruct)

[https://huggingface.co/bartowski/Qwentile2.5-32B-Instruct-GGUF/tree/main](https://huggingface.co/bartowski/Qwentile2.5-32B-Instruct-GGUF/tree/main)

I've tested a lot of different models based on Qwen 2.5 32B, and this one I've found really balanced and interesting. I'm using my own real-world handmade tests based on different categories:

- JS, HTML, CSS coding.
- Logic and math.
- Pop knowledge.
- Data formatting and completion.
- Multi-language.
- RPG system following (using dice).
- Heavy NSFW.
- Heavy violence.

Usually models have strong and weak points across these tests, and this is okay. But Qwentile 2.5-32B has an interesting result: it passes all of them, not brilliantly, but fine:

- Made me one-shot Tetris, dice, snake, and calculator games.
- Handled logic and math questions okay.
- Made some mistakes in the details of pop knowledge, but all the general info was okay and correct.
- Formatted some data for work at about a 95% level of correctness.
- Has okay multi-language support (add the language you want it to talk in the system prompt).
- Followed an RPG system fine (something like adventure books with dice and formulas).
- Has no problem with NSFW; sex scenes are a bit naive, but the model is not thirsty like others.
- Has a fine understanding of violence, despair, darkness, panic, and death.
- No problem with censoring; maybe 1-2 re-rolls with really heavy stuff.

Why is this important?

- This model is really a great all-rounder for most cases.
- It has no color; it doesn't try to be horny, positive, dark, judgmental, etc.
- It has no problem with censoring 95% of the time, and doesn't tell you what's good or bad.
- It is fairly smart, follows instructions fine, and keeps things simple and clean.
- It writes really well, sticks to characters and mood, and quickly learned the author's style.
- It has no problems with any kind of scenes, violence, etc.
- It loves to follow the system prompt.

If you are looking for a model with a low censorship level, a model that can do any kind of job (for a 32B) just fine, this is THE guy. Cheers.
2025-01-06T12:40:24
https://www.reddit.com/r/LocalLLaMA/comments/1huy37d/model_highlight_qwentile_2532binstruct_short/
-Ellary-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huy37d
false
null
t3_1huy37d
/r/LocalLLaMA/comments/1huy37d/model_highlight_qwentile_2532binstruct_short/
false
false
self
1
{preview: 1200x648 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
How do you share access to your models?
1
Hey, I wonder how you share access to your models? I'm talking about colleagues or friends, etc.

To give a little context, I'm an assistant professor in CS at a small European university and I work with people from other fields (law, biology, etc.). Over time, I've trained a number of models (LLM, RAG), some of which produce and generate content reliable enough to be used on a daily basis. As my colleagues know nothing about computers, the only way for them to use the models is via a website. The problem is that I come from a maths/statistics background, so training models is no problem, but setting up a web interface (LLM + RAG) with user accounts and a chat is extremely complicated for me. I have access to a server with 48GB GPUs and the university allows me to host the website for colleagues/students.

What tools do you use today for this type of project? I see a lot of repos for local RAG, but it's hard to find my way around for larger projects. Are some tools easier to use than others? Are there any reliable educational resources for this type of project? I'm exclusively doing LLM + RAG (~100-200k documents) and we'll be a dozen people using it.
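One common stack for exactly this: serve the model behind an OpenAI-compatible endpoint (e.g. `vllm serve <model>` on the GPU server) and put an off-the-shelf chat front end with user accounts, such as Open WebUI, in front of it, so no web code needs writing. A minimal sketch of the client side, assuming a vLLM server on localhost and an illustrative model name:

```python
# Sketch: once the model sits behind an OpenAI-compatible server (for
# example one started with `vllm serve <model>`), any UI or script can talk
# to it with the standard openai client; accounts/chat come from an
# off-the-shelf front end like Open WebUI rather than custom code.
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused-locally")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whatever you actually serve
    messages=[{"role": "user", "content": "Test from a colleague's script"}],
)
print(resp.choices[0].message.content)
```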
2025-01-06T12:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1huy512/how_do_you_share_access_to_your_models/
tkon3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huy512
false
null
t3_1huy512
/r/LocalLLaMA/comments/1huy512/how_do_you_share_access_to_your_models/
false
false
self
1
null
Deepseek v3 on VM
1
[removed]
2025-01-06T12:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1huyema/deepseek_v3_on_vm/
Ok_Company6990
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huyema
false
null
t3_1huyema
/r/LocalLLaMA/comments/1huyema/deepseek_v3_on_vm/
false
false
self
1
null
Lighteval: the Evaluation framework from Hugging Face
1
Hi guys! Very excited to share [Lighteval](https://github.com/huggingface/lighteval), the evaluation framework we use internally at Hugging Face. Here are the key features:

* **Python API with training / eval loop**: [Simple integration with the Python API](https://huggingface.co/docs/lighteval/using-the-python-api), easily integrate Lighteval into your training loop!
* **Speed**: [Use vllm as a backend for fast evals](https://huggingface.co/docs/lighteval/use-vllm-as-backend).
* **Completeness**: Choose from multiple backends to launch models from almost any provider and compare closed and open-source models at the speed of light. You can choose from local backends ([transformers](https://github.com/huggingface/transformers), [vllm](https://github.com/vllm-project/vllm), [tgi](https://github.com/huggingface/text-generation-inference)) or API providers ([litellm](https://github.com/BerriAI/litellm), [inference endpoints](https://huggingface.co/inference-endpoints/dedicated)).
* **Seamless Storage**: [Save results in S3 or Hugging Face Datasets](https://huggingface.co/docs/lighteval/saving-and-reading-results).
* **Custom Tasks**: [Easily add custom tasks](https://huggingface.co/docs/lighteval/adding-a-custom-task).
* **Versatility**: Tons of [metrics](https://huggingface.co/docs/lighteval/metric-list) and [tasks](https://huggingface.co/docs/lighteval/available-tasks) ready to go.

Here is how to get started fast: evaluate Llama-3.1-70B-Instruct on the gsm8k benchmark and compare results with OpenAI's o1-mini!

    pip install lighteval[vllm,litellm]
    lighteval vllm "pretrained=meta-llama/Llama-3.1-70B-Instruct,dtype=bfloat16" "lighteval|gsm8k|5|1" --use-chat-template
    lighteval endpoint litellm "o1-mini" "lighteval|gsm8k|5|1" --use-chat-template

If you have strong opinions on evaluation and think there are still things missing, don't hesitate to help us; we would be delighted to have your help and build what will help us get better and safer AI.
2025-01-06T12:58:59
https://www.reddit.com/r/LocalLLaMA/comments/1huyezc/lighteval_the_evaluation_framework_from_hugging/
HauntingMoment
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huyezc
false
null
t3_1huyezc
/r/LocalLLaMA/comments/1huyezc/lighteval_the_evaluation_framework_from_hugging/
false
false
self
1
{preview: 1200x600 source image on external-preview.redd.it, 6 scaled variants (108-1080 px), enabled: False}
DeepSeek v3 - Data privacy
1
Hey folks, Quick question about **DeepSeek v3** – anyone else feel there might be some data privacy concerns with it? It’s a Chinese model and doesn’t run on well-known cloud providers like Azure or AWS, which makes me wonder how secure the data is. I’m planning to use it for **commercial purposes**, so I really want to make sure everything is safe and legit. Has anyone looked into how their infrastructure works or if they’ve done any security checks? Would love to hear your thoughts or experiences. Also, if you know of any good alternatives, I’m all ears! Cheers! 😊
2025-01-06T13:02:28
https://www.reddit.com/r/LocalLLaMA/comments/1huyhl0/deepseek_v3_data_privacy/
Ilarom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huyhl0
false
null
t3_1huyhl0
/r/LocalLLaMA/comments/1huyhl0/deepseek_v3_data_privacy/
false
false
self
1
null
🚀 Calling All Devs! Let's Build an AI-Powered Voice Browser for Accessibility 💻🎙️
1
Hey Reddit Devs! I’m working on an open-source project that aims to make web browsing truly accessible for everyone, especially for individuals with disabilities such as blindness, motor impairments, or those who rely on voice interaction. The Vision: Imagine a browser where every action—from opening a tab to navigating complex e-commerce sites—is driven entirely by voice commands + AI. A browser that not only listens to your instructions but also talks back, guiding you through the process in a seamless, user-friendly way. The Problem We're Solving: Web accessibility has always been a challenge, especially for people with disabilities. Many existing solutions are clunky, incomplete, or require heavy manual effort. We’re creating a solution that bridges this gap, allowing users to browse, interact, and shop online with ease, hands-free! How It Works: 1. Voice Command to Text: Users give commands like "Open a new tab" or "Search for Amazon.in". These commands are processed using voice-to-text models (e.g., Whisper, Vosk). 2. AI-Powered Decisions: The text, combined with the HTML markup of the current page, is sent to an LLM (like OpenAI’s GPT) to determine the intent and action. 3. Execute the Task: The browser uses automation frameworks (like Playwright) to perform the action, whether it's clicking a link, filling a form, or applying filters. 4. Voice Feedback: Once completed, the browser speaks back the result, e.g., "I’ve opened a new tab. What would you like to do next?". A Simple Example: User: "Open a new tab." Browser: "I’ve opened a new tab, now what can I do for you?" User: "Search for Amazon.in." Browser: Opens Amazon and responds, "I’ve got the site for you. What’s next?" User: "Search for mobiles under 10k." Browser: Fills the search bar, applies filters, and responds, "Results are ready. Would you like to explore further?" Why This Matters: This project has the potential to revolutionize web accessibility and bring an entirely new dimension to browsing for those who need it the most. It's also a great opportunity to showcase how AI and voice technology can solve real-world problems. Where You Come In: We’re looking for developers who: Are passionate about open-source projects and accessibility. Have experience with AI/LLMs, Playwright, speech recognition, or web development. Want to collaborate on an impactful project that can change lives. What You’ll Gain: Be part of an innovative project that blends AI, voice tech, and automation. Make a direct impact on improving web accessibility for millions. Collaborate with like-minded developers in the open-source community. If this excites you and you want to contribute, drop a comment or DM me to join the project. Let’s build something meaningful together! 🌟 #OpenSource #AI #Accessibility #VoiceTech #WebAutomation
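A rough sketch of the four-step loop described above, assuming openai-whisper, playwright, and pyttsx3 are installed; decide_action() is a placeholder for whatever LLM call you wire in:

    # Voice -> intent -> browser action -> spoken feedback, as outlined in the post.
    import whisper
    import pyttsx3
    from playwright.sync_api import sync_playwright

    stt = whisper.load_model("base")   # 1. speech-to-text
    tts = pyttsx3.init()               # 4. text-to-speech feedback

    def decide_action(command: str, page_html: str) -> str:
        """Placeholder: send the command plus page HTML to an LLM, get back a URL."""
        return "https://www.amazon.in" if "amazon" in command.lower() else "about:blank"

    with sync_playwright() as p:
        page = p.chromium.launch(headless=False).new_page()
        command = stt.transcribe("command.wav")["text"]   # transcribe the user's request
        url = decide_action(command, page.content())      # 2. LLM decides the action
        page.goto(url)                                    # 3. execute it in the browser
        tts.say(f"I've opened {url}. What would you like to do next?")
        tts.runAndWait()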
2025-01-06T13:39:46
https://www.reddit.com/r/LocalLLaMA/comments/1huz81z/calling_all_devs_lets_build_an_aipowered_voice/
maneesh_sandra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huz81z
false
null
t3_1huz81z
/r/LocalLLaMA/comments/1huz81z/calling_all_devs_lets_build_an_aipowered_voice/
false
false
self
1
null
🚀 Calling All Devs! Let's Build an AI-Powered Voice Browser for Accessibility 💻🎙️
1
[removed]
2025-01-06T13:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1huz8s3/calling_all_devs_lets_build_an_aipowered_voice/
maneesh_sandra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huz8s3
false
null
t3_1huz8s3
/r/LocalLLaMA/comments/1huz8s3/calling_all_devs_lets_build_an_aipowered_voice/
false
false
self
1
null
🚀 Calling All Devs! Let's Build an AI-Powered Voice Browser for Accessibility 💻🎙️
1
[removed]
2025-01-06T13:40:39
https://www.reddit.com/r/LocalLLaMA/comments/1huz914/calling_all_devs_lets_build_an_aipowered_voice/
maneesh_sandra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huz914
false
null
t3_1huz914
/r/LocalLLaMA/comments/1huz914/calling_all_devs_lets_build_an_aipowered_voice/
false
false
self
1
null
🚀 Calling All Devs! Let's Build an AI-Powered Voice Browser for Accessibility 💻🎙️
1
[removed]
2025-01-06T13:43:06
https://www.reddit.com/r/LocalLLaMA/comments/1huzaw8/calling_all_devs_lets_build_an_aipowered_voice/
maneesh_sandra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzaw8
false
null
t3_1huzaw8
/r/LocalLLaMA/comments/1huzaw8/calling_all_devs_lets_build_an_aipowered_voice/
false
false
self
1
null
How are agents going to be a thing when inference is still slow?
1
I'm just wondering, since I keep reading how agentic AI is the next big thing in 2025.
2025-01-06T13:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1huzdzm/how_are_agents_going_to_be_a_thing_when_inference/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzdzm
false
null
t3_1huzdzm
/r/LocalLLaMA/comments/1huzdzm/how_are_agents_going_to_be_a_thing_when_inference/
false
false
self
1
null
Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Training
1
[removed]
2025-01-06T14:03:05
https://www.reddit.com/r/LocalLLaMA/comments/1huzpw7/introducing_longtalkcot_v01_a_very_long/
transformer_ML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzpw7
false
null
t3_1huzpw7
/r/LocalLLaMA/comments/1huzpw7/introducing_longtalkcot_v01_a_very_long/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080, 'height': 583}], 'variants': {}, 'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE'}], 'enabled': False}
LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset
1
[removed]
2025-01-06T14:06:02
https://www.reddit.com/r/LocalLLaMA/comments/1huzs42/longtalkcot_v01_a_very_long_chainofthought_dataset/
transformer_ML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzs42
false
null
t3_1huzs42
/r/LocalLLaMA/comments/1huzs42/longtalkcot_v01_a_very_long_chainofthought_dataset/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080, 'height': 583}], 'variants': {}, 'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE'}], 'enabled': False}
Tools to analyze an entire code base?
1
Hi, I am currently looking for tools that are capable of analyzing an entire code base. Nothing too crazy, think 10 files of 100 lines each. The tool must be able to understand the context of the code base. Some of the use cases I am seeking: 1. Analyze the code base and give constructive feedback about the code quality. 2. Are there potential bugs in this code base? 3. Write unit tests for each module. 4. Write comprehensive documentation. Mainly looking for open-source or source-available tools, as some code bases can't be shared with third-party providers.
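For the fully local case, a minimal sketch of the brute-force approach, assuming an Ollama server on the default port; the model name and review prompt are illustrative:

    # Concatenate a small code base into one prompt and ask a local model to review it.
    import pathlib
    import requests

    def review_codebase(root: str, model: str = "llama3.1:8b") -> str:
        files = sorted(pathlib.Path(root).rglob("*.py"))
        corpus = "\n\n".join(f"# File: {f}\n{f.read_text()}" for f in files)
        prompt = ("You are a code reviewer. Analyze this code base and give "
                  "constructive feedback on quality and potential bugs:\n\n" + corpus)
        resp = requests.post("http://localhost:11434/api/generate",
                             json={"model": model, "prompt": prompt, "stream": False},
                             timeout=600)
        return resp.json()["response"]

    print(review_codebase("./my_project"))

At roughly 10 files of 100 lines, the whole code base fits comfortably in most models' context windows, which is why this naive concatenation approach works at this scale.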
2025-01-06T14:06:48
https://www.reddit.com/r/LocalLLaMA/comments/1huzsnh/tools_to_analyze_an_entire_code_base/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzsnh
false
null
t3_1huzsnh
/r/LocalLLaMA/comments/1huzsnh/tools_to_analyze_an_entire_code_base/
false
false
self
1
null
LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset
1
2025-01-06T14:07:06
https://huggingface.co/datasets/kenhktsui/longtalk-cot-v0.1
transformer_ML
huggingface.co
1970-01-01T00:00:00
0
{}
1huzsv9
false
null
t3_1huzsv9
/r/LocalLLaMA/comments/1huzsv9/longtalkcot_v01_a_very_long_chainofthought_dataset/
false
false
https://b.thumbs.redditm…Xd6885uAUQGo.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080, 'height': 583}], 'variants': {}, 'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE'}], 'enabled': False}
Help with ollama and the Continue VSCode extension? Sometimes it works, sometimes it fails spectacularly
1
I am using the Continue VSCode plugin with Llama 3.1 8B running on my GPU. Sometimes it does exactly what I want it to, and sometimes it fails spectacularly. For instance, telling it "add a comment for this line of code" with a line of code added to the edit context might spit out "# Put comment here" on a previous line, or it might spit out many lines of a different part of that same file, or it might put "Here is a revised version of your code...." with a huge chunk of code below it. It generally works perfectly fine when I open a chat and ask it what a line of code does. But when trying to edit code, it goes absolutely haywire. Any idea what I might be doing wrong? Not sure if it's relevant, but I have installed ollama-rocm-git from the AUR on Arch Linux and I'm running these models on a 7900 XT. I also had some even more odd behavior when using LM Studio to run the model instead. I'm probably missing something pretty major, so any help pointing me in the right direction would be appreciated. My config file for Continue: { "models": [ { "title": "Llama 3.1 8B", "provider": "ollama", "model": "llama3.1:8b" } ], "tabAutocompleteModel": { "title": "Qwen2.5-Coder 1.5B", "provider": "ollama", "model": "qwen2.5-coder:1.5b" }, "contextProviders": [ { "name": "code", "params": {} }, { "name": "docs", "params": {} }, { "name": "diff", "params": {} }, { "name": "terminal", "params": {} }, { "name": "problems", "params": {} }, { "name": "folder", "params": {} }, { "name": "codebase", "params": {} } ], "slashCommands": [ { "name": "share", "description": "Export the current chat session to markdown" }, { "name": "cmd", "description": "Generate a shell command" }, { "name": "commit", "description": "Generate a git commit message" } ], "embeddingsProvider": { "provider": "ollama", "model": "nomic-embed-text" } }
2025-01-06T14:07:55
https://www.reddit.com/r/LocalLLaMA/comments/1huztgm/help_with_ollama_and_the_continue_vscode/
im_dylan_it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huztgm
false
null
t3_1huztgm
/r/LocalLLaMA/comments/1huztgm/help_with_ollama_and_the_continue_vscode/
false
false
self
1
null
Help with summarisation prompting for Llama-3.3
1
[removed]
2025-01-06T14:08:49
https://www.reddit.com/r/LocalLLaMA/comments/1huzu3e/help_with_summarisation_prompting_for_llama33/
drsphelps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzu3e
false
null
t3_1huzu3e
/r/LocalLLaMA/comments/1huzu3e/help_with_summarisation_prompting_for_llama33/
false
false
self
1
null
What is your 2025 prediction about AI?
1
[removed]
2025-01-06T14:14:34
https://www.reddit.com/r/LocalLLaMA/comments/1huzyg2/what_is_your_2025_prediction_about_ai/
transformer_ML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1huzyg2
false
null
t3_1huzyg2
/r/LocalLLaMA/comments/1huzyg2/what_is_your_2025_prediction_about_ai/
false
false
self
1
null
Hand drawn geometric shapes into computer image or svg file
1
[removed]
2025-01-06T14:31:44
https://www.reddit.com/r/LocalLLaMA/comments/1hv0bl2/hand_drawn_geometric_shapes_into_computer_image/
afnanqasim74
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv0bl2
false
null
t3_1hv0bl2
/r/LocalLLaMA/comments/1hv0bl2/hand_drawn_geometric_shapes_into_computer_image/
false
false
https://b.thumbs.redditm…u8Ufj1ej0xfc.jpg
1
null
PV-Tuning + WebAssembly: How I Ran an 8B Llama Model Inside a Web Browser
1
[removed]
2025-01-06T14:35:01
https://www.reddit.com/r/LocalLLaMA/comments/1hv0e5j/pvtuning_webassembly_how_i_ran_an_8b_llama_model/
galqiwi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv0e5j
false
null
t3_1hv0e5j
/r/LocalLLaMA/comments/1hv0e5j/pvtuning_webassembly_how_i_ran_an_8b_llama_model/
false
false
self
1
null
Benchmarking models on the NVIDIA GH200
1
Hi Everyone, I’m in the early stages of looking into solutions for an on-prem deployment, and while looking into the NVIDIA GH200 I came across some benchmarks for Llama 3.1 70B using vLLM. The results I found were published by Sam Stoelinga on Substratus, and they looked really promising. Benchmark Results: Default Settings: • Successful Requests: 1000 • Benchmark Duration: 169.46 seconds • Request Throughput: 5.90 req/s • Output Token Throughput: 1022.25 tok/s • Total Token Throughput: 2393.86 tok/s • Mean Time to First Token (TTFT): 34702.73 ms • Median TTFT: 16933.34 ms • Mean Time Per Output Token (TPOT): 164.05 ms CPU Offload & Increased Context Length (120k tokens): • Successful Requests: 1000 • Benchmark Duration: 439.96 seconds • Request Throughput: 2.27 req/s • Output Token Throughput: 393.61 tok/s • Total Token Throughput: 921.91 tok/s • Mean TTFT: 23549.66 ms • Mean TPOT: 700.44 ms Full benchmarks are available here: Substratus Blog. Given the GH200’s specs (624GB total memory, 144GB HBM3e, 480GB LPDDR5X at 512GB/s), I'm wondering whether it could achieve reasonable generation speeds with DeepSeek V3 at Q4 quantization. Does anyone have experience benchmarking any other models, can anyone confirm the tok/s speeds from this benchmark on the GH200, or does anyone know of additional resources?
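As a sanity check on the published numbers, the throughput figures follow directly from the request count and benchmark duration (values copied from the default-settings run above):

    # Cross-checking the default-settings benchmark arithmetic.
    requests_done, duration_s = 1000, 169.46
    print(f"{requests_done / duration_s:.2f} req/s")           # ~5.90 req/s, as reported
    print(f"{1022.25 * duration_s / 1e3:.0f}k output tokens")  # ~173k tokens generated in total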
2025-01-06T14:41:04
https://www.substratus.ai/blog/benchmarking-llama-3.1-70b-on-gh200-vllm
Ok-Perception2973
substratus.ai
1970-01-01T00:00:00
0
{}
1hv0iel
false
null
t3_1hv0iel
/r/LocalLLaMA/comments/1hv0iel/benchmarking_models_on_the_nvidia_gh200/
false
false
default
1
null
Options for free API for IRC quiz bot
1
Hi all! I know this community mostly deals with local LLMs, but I thought I might ask you nevertheless. I'm creating an IRC quiz bot (open source, you can take a look here: [QuizBot](https://github.com/MansionNET/QuizBot)), and my idea was to use an AI API for question generation. The IRC server is fully self-hosted locally though, in my living room :) I've made some good progress with the free tier of the Mistral API, but I'm open to exploring some other free options, as we don't really have any funding at the moment. Do you have any ideas about which free APIs would be OK to test? I've heard about the Hugging Face options, but didn't figure out if I can use them, to be honest. P.S. If anyone is interested in joining the server, hit me up and I'll share the details with you. Cheers and thanks in advance!
2025-01-06T14:54:50
https://www.reddit.com/r/LocalLLaMA/comments/1hv0t5g/options_for_free_api_for_irc_quzi_bot/
avatar_one
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv0t5g
false
null
t3_1hv0t5g
/r/LocalLLaMA/comments/1hv0t5g/options_for_free_api_for_irc_quzi_bot/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?auto=webp&s=1f21a5ccdcb678f11eb7312f9f9b2da110992898', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=108&crop=smart&auto=webp&s=19edb32f6604eca72e289c35dc01f5de4cf7a2f0', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=216&crop=smart&auto=webp&s=42fbfff961c2bf8266c71b717abbeb07e4c996fd', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=320&crop=smart&auto=webp&s=9e2920893ab6e1237054775a14c6fdf51dd0b51c', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=640&crop=smart&auto=webp&s=233f96ba2c4551f751e73497be48820089af252c', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=960&crop=smart&auto=webp&s=e12bf5b5315f80474148282d5df53b36f092322a', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/Hw0azmURUVUDv6O7rsJTeUpQ0B4LVVzsUuqDUFMvRTo.jpg?width=1080&crop=smart&auto=webp&s=a4d88bb23f561b49c6b0088415a376404576d203', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'yA-OZHwWqdvRxA7AHKKzJUBilyptT-fay_E6ecPP2w0'}], 'enabled': False}
Deepseek V3 on CPU-only AWS EC2 instances
1
Has anyone here managed to run DeepSeek V3 on AWS EC2 using only the CPU? There are several instances with more than 700GB of RAM at an affordable price (also Spot instances); I was wondering if anyone has done this.
2025-01-06T14:58:38
https://www.reddit.com/r/LocalLLaMA/comments/1hv0w4y/deepseek_v3_on_cpuonly_aws_ec2_instances/
felipemarinho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv0w4y
false
null
t3_1hv0w4y
/r/LocalLLaMA/comments/1hv0w4y/deepseek_v3_on_cpuonly_aws_ec2_instances/
false
false
self
1
null
AI agents as finite state machine ?
1
In his latest blog post, Sam Altman wrote: 'In 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.' That's bold, and it prompted me to resume my experimentation with AI agents. Here, I’m using DSPy, [LiteLLM](https://x.com/LiteLLM), and [nebiusaistudio](https://x.com/nebiusaistudio). The agent is built around a finite state machine implementation. I am thinking about extending this concept if it makes sense. Anyone playing with AI agents and finite state machines? Would love any feedback. My first script: [https://gist.github.com/fsndzomga/2f5d6407733ecde19760733da4536321](https://gist.github.com/fsndzomga/2f5d6407733ecde19760733da4536321)
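For anyone curious what the skeleton looks like, here is a toy finite-state-machine agent loop; the states and events are made up for illustration and are not taken from the linked gist:

    # Minimal FSM: transitions map (state, event) pairs to the next state.
    class AgentFSM:
        def __init__(self, initial: str):
            self.state = initial
            self.transitions: dict[tuple[str, str], str] = {}

        def add_transition(self, state: str, event: str, next_state: str) -> None:
            self.transitions[(state, event)] = next_state

        def step(self, event: str) -> str:
            self.state = self.transitions.get((self.state, event), self.state)
            return self.state

    fsm = AgentFSM("plan")
    fsm.add_transition("plan", "plan_ready", "act")
    fsm.add_transition("act", "tool_result", "reflect")
    fsm.add_transition("reflect", "needs_work", "plan")
    fsm.add_transition("reflect", "done", "finish")

    for event in ["plan_ready", "tool_result", "needs_work", "plan_ready", "tool_result", "done"]:
        print(event, "->", fsm.step(event))

The appeal of the FSM framing is that the LLM only ever has to choose among the legal events for the current state, which keeps the agent's behavior constrained and auditable.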
2025-01-06T15:05:20
https://www.reddit.com/r/LocalLLaMA/comments/1hv11ud/ai_agents_as_finite_state_machine/
franckeinstein24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv11ud
false
null
t3_1hv11ud
/r/LocalLLaMA/comments/1hv11ud/ai_agents_as_finite_state_machine/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/_2sZJ-5D2v8qnURi6kDRNqI6lm3qHoM2g3A3r47BnCk.jpg?auto=webp&s=de46b138dc496066620aa148acccf9945f09c420', 'width': 200, 'height': 200}, 'resolutions': [{'url': 'https://external-preview.redd.it/_2sZJ-5D2v8qnURi6kDRNqI6lm3qHoM2g3A3r47BnCk.jpg?width=108&crop=smart&auto=webp&s=324f1dea964bc47170d9001b9ed969190aed2a5f', 'width': 108, 'height': 108}], 'variants': {}, 'id': 'ABjknx-s_iGXfntxBptkxkDWPvP8jkcGOkvUquSCbmk'}], 'enabled': False}
RTX 5090 rumored to have 1.8 TB/s memory bandwidth
1
As per [this](https://videocardz.com/newz/exclusive-first-look-at-geforce-rtx-5090-with-32gb-gddr7-memory) article, the 5090 is rumored to have 1.8 TB/s memory bandwidth and a 512-bit memory bus, which makes it better than any professional card except the A100/H100, which have HBM2/3 memory, 2 TB/s memory bandwidth, and a 5120-bit memory bus. Even though the VRAM is limited to 32GB (GDDR7), it could be the fastest for running any LLM <30B at Q6.
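A back-of-envelope check on that last claim: single-stream decode speed is roughly bounded by memory bandwidth divided by the bytes of weights read per token, so (assuming a ~30B model at roughly 6.5 bits per weight for Q6):

    # Rough decode-speed ceiling from the rumored bandwidth; all inputs are estimates.
    bandwidth_gb_s = 1800                 # rumored 1.8 TB/s
    model_gb = 30 * 6.5 / 8               # ~24 GB for a 30B model at Q6, fits in 32 GB
    print(f"~{bandwidth_gb_s / model_gb:.0f} tok/s theoretical ceiling")  # ~74 tok/s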
2025-01-06T15:20:42
https://www.reddit.com/r/LocalLLaMA/comments/1hv1efu/rtx_5090_rumored_to_have_18_tbs_memory_bandwidth/
TechNerd10191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv1efu
false
null
t3_1hv1efu
/r/LocalLLaMA/comments/1hv1efu/rtx_5090_rumored_to_have_18_tbs_memory_bandwidth/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?auto=webp&s=a704cd648f1c14c3171434ea5710cf8a27a79c5a', 'width': 2000, 'height': 1040}, 'resolutions': [{'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=108&crop=smart&auto=webp&s=f28e6da440429db1e89cb0b2a574d6586d4c589e', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=216&crop=smart&auto=webp&s=b774fbb599651203c90cfca364bd1a616a3d6c02', 'width': 216, 'height': 112}, {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=320&crop=smart&auto=webp&s=aa733a95d04d484caedd19bfea94c0c8d9efb8ca', 'width': 320, 'height': 166}, {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=640&crop=smart&auto=webp&s=b3e5bc7b598a40e4d106ff4bdbb8eefda7e1f71c', 'width': 640, 'height': 332}, {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=960&crop=smart&auto=webp&s=9bceb7f05a970709de1032c2b52792ef00507c4f', 'width': 960, 'height': 499}, {'url': 'https://external-preview.redd.it/yvFevDYEgJ0_onHzhBxJAFHnAp2HNx8XKHWIEx2Zi0o.jpg?width=1080&crop=smart&auto=webp&s=3531b96c9a7cbff42b9ce44d5263f1616b37a54d', 'width': 1080, 'height': 561}], 'variants': {}, 'id': 'AgcEIcSjQMCnNzQQUhBFjQjCiZrUu5j6N7_kA1qwZ4o'}], 'enabled': False}
Looking to host a webapp that runs a local llm
1
Where would I start with this? I have a Flask app I want to host. Would I use something like a VPS? Any recommendations? Currently it's running fine on my laptop. I have heard of A2 and Bluehost; any others I should look at? AWS?
2025-01-06T15:31:42
https://www.reddit.com/r/LocalLLaMA/comments/1hv1nbx/looking_to_host_a_webapp_that_runs_a_local_llm/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv1nbx
false
null
t3_1hv1nbx
/r/LocalLLaMA/comments/1hv1nbx/looking_to_host_a_webapp_that_runs_a_local_llm/
false
false
self
1
null
Local Claude 3.5 coding equivalent
1
[removed]
2025-01-06T15:33:36
https://www.reddit.com/r/LocalLLaMA/comments/1hv1ovn/local_claude_35_coding_equivalent/
Sparky-Sundevil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv1ovn
false
null
t3_1hv1ovn
/r/LocalLLaMA/comments/1hv1ovn/local_claude_35_coding_equivalent/
false
false
self
1
null
Latest Creative Writing LLM?
1
Hi! I've been digging around a bit, searching for an LLM that is tailored towards just text-completion writing (preferably with a large context) and that has little slop. I tried base models, but of course getting the AI to follow a narrative is difficult without giving a TON of examples. And I've tried roleplay models, but they're trained on a ton of synthetic data that is geared more towards chat-style roleplay than prose. Is there a model that has a good mix of instruction following and free-flow writing, trained primarily off stories and human-made content?
2025-01-06T16:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1hv2by7/latest_creative_writing_llm/
jeremiahn4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv2by7
false
null
t3_1hv2by7
/r/LocalLLaMA/comments/1hv2by7/latest_creative_writing_llm/
false
false
self
1
null
Local model that supports tools and vision
1
Does anyone know of any? It seems to be one or the other with Ollama.
2025-01-06T16:07:41
https://www.reddit.com/r/LocalLLaMA/comments/1hv2hyz/local_model_that_supports_tools_and_vision/
megadonkeyx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv2hyz
false
null
t3_1hv2hyz
/r/LocalLLaMA/comments/1hv2hyz/local_model_that_supports_tools_and_vision/
false
false
self
1
null
llama vision couldn't figure it out.
1
2025-01-06T16:08:00
https://i.redd.it/p9bw3edidebe1.jpeg
Spirited-Lunch1027
i.redd.it
1970-01-01T00:00:00
0
{}
1hv2i7w
false
null
t3_1hv2i7w
/r/LocalLLaMA/comments/1hv2i7w/llama_vision_couldnt_figure_it_out/
false
false
https://b.thumbs.redditm…3F1O4Xq7JlpE.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?auto=webp&s=2c27971daba5774eda765875c3d0a38a8b019723', 'width': 1138, 'height': 406}, 'resolutions': [{'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=108&crop=smart&auto=webp&s=0c11cfa57da5fe99e6f98d0aab8604513bf3e6fc', 'width': 108, 'height': 38}, {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=216&crop=smart&auto=webp&s=8375b905c55c5fc888782a6ac0c4148934b54d2c', 'width': 216, 'height': 77}, {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=320&crop=smart&auto=webp&s=95e48c6f0e0c08aec81e0cdb94133948cbe61c9b', 'width': 320, 'height': 114}, {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=640&crop=smart&auto=webp&s=8846db44a214faa7c64d17ecf7ad912ab5cf97c1', 'width': 640, 'height': 228}, {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=960&crop=smart&auto=webp&s=04a5c3140eb8a908b6b0a2bb9cf09b7b04acee21', 'width': 960, 'height': 342}, {'url': 'https://preview.redd.it/p9bw3edidebe1.jpeg?width=1080&crop=smart&auto=webp&s=12d05b780ce3d7042b6b1768fa6477c283d9ee95', 'width': 1080, 'height': 385}], 'variants': {}, 'id': 'PoNQdhPMzMZyuKjfEtRX9UnYS_DRlfeoMtMd_LLKp6Y'}], 'enabled': True}
Cheapest persistent storage for A100 GPUs?
1
[removed]
2025-01-06T16:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1hv34wb/cheapest_persistent_storage_for_a100_gpus/
RealAI22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv34wb
false
null
t3_1hv34wb
/r/LocalLLaMA/comments/1hv34wb/cheapest_persistent_storage_for_a100_gpus/
false
false
self
1
null
Newbie - learning ollama, RAG, and studying scientific journals
1
I'm running Ollama with a multitude of LLMs on my local computer with NVIDIA support. I am using Open WebUI for my interface, and I have successfully utilized a knowledge base with about 200 science papers. I was studying data from the papers and started to notice that I don't think it's "getting everything" from the papers. I've now had my eyes opened to the context windows and limitations of LLMs. I am sure I just have more learning to do. Is there a way to "read in" the 200+ science journals into my RAG in a way where I am not losing some of it and I can still analyze it with llama3.3 or gemma2? TYIA
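One likely culprit is the chunking step: if papers are split into too few or badly placed chunks, retrieval can only ever surface part of each document. A minimal sketch of overlapping chunking (sizes are illustrative; Open WebUI exposes similar knobs in its document settings):

    # Split a long paper into overlapping chunks so retrieval can reach all of it.
    def chunk_text(text: str, chunk_chars: int = 2000, overlap: int = 200) -> list[str]:
        chunks, start = [], 0
        while start < len(text):
            chunks.append(text[start:start + chunk_chars])
            start += chunk_chars - overlap
        return chunks

    paper = open("paper_001.txt").read()   # hypothetical extracted text of one paper
    print(f"{len(chunk_text(paper))} chunks to embed for this paper")

Raising the number of retrieved chunks (top-k) per query also helps when answers span many papers.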
2025-01-06T16:38:24
https://www.reddit.com/r/LocalLLaMA/comments/1hv384i/newbie_learning_ollama_rag_and_studying/
heyflyguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv384i
false
null
t3_1hv384i
/r/LocalLLaMA/comments/1hv384i/newbie_learning_ollama_rag_and_studying/
false
false
self
1
null
LLM Creative Story-Writing Benchmark
1
2025-01-06T16:38:31
https://github.com/lechmazur/writing
zero0_one1
github.com
1970-01-01T00:00:00
0
{}
1hv387z
false
null
t3_1hv387z
/r/LocalLLaMA/comments/1hv387z/llm_creative_storywriting_benchmark/
false
false
https://b.thumbs.redditm…AyVOqFmkOuPg.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?auto=webp&s=411cb55b13917e2091ed37352fc9d896d4579a1d', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=108&crop=smart&auto=webp&s=cecae645e6286c0b405109e5f5c8e45df60e7f75', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=216&crop=smart&auto=webp&s=20a3ade5a5af18a88c8ab1eb87475bbd721c43f0', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=320&crop=smart&auto=webp&s=ecb3764d20ed523ed6a5246b83e0c9add9a77016', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=640&crop=smart&auto=webp&s=d6f7420947c4ca7764aa8e26625a8fdffd9fcacd', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=960&crop=smart&auto=webp&s=d50aba75deb4109ec0644ce51b91e9d651eced6e', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/q1x6iJ_7NVfT2Q2KsKjrH32R1lKtxD1KsPTaV5ZMgjw.jpg?width=1080&crop=smart&auto=webp&s=b82541ee26484708a8e18996f843482fe0256bca', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '6b4fzKOxXQOCpgXIhgMejM8qnIbqKLNvmFWYOPRWHk0'}], 'enabled': False}
Controlling LLMs with Physical Interfaces via Dynamic Prompts
1
[removed]
2025-01-06T16:42:22
https://www.reddit.com/r/LocalLLaMA/comments/1hv3bm8/controlling_llms_with_physical_interfaces_via/
vectorizr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv3bm8
false
null
t3_1hv3bm8
/r/LocalLLaMA/comments/1hv3bm8/controlling_llms_with_physical_interfaces_via/
false
false
self
1
null
Dolphin 3.0 is here! 🐬🐬
1
[removed]
2025-01-06T16:51:09
https://www.reddit.com/r/LocalLLaMA/comments/1hv3jde/dolphin_30_is_here/
clduab11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv3jde
false
null
t3_1hv3jde
/r/LocalLLaMA/comments/1hv3jde/dolphin_30_is_here/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?auto=webp&s=857e5c4ba6a4a5669e3cf76a5e6d278b2df5adde', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=108&crop=smart&auto=webp&s=f0d5e2e6de4bff1b7b87819d9467904880a408d3', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=216&crop=smart&auto=webp&s=740dd0ceef67852a058afad4b48d0e4f9d904ff4', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=320&crop=smart&auto=webp&s=9b4a839b6c21c7c30ce6bf779fe348eb475a318a', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=640&crop=smart&auto=webp&s=35762ca011564a31bd0387d1d535874e2bbb33c2', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=960&crop=smart&auto=webp&s=b664a3a0b11500279a6930616207d1f9a2f91794', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/RDtnHLr3enIirLUTYyEWWf8SjhkYvz9YIuZ89YFUL_4.jpg?width=1080&crop=smart&auto=webp&s=bbe0920c765146a1ece5c1b8b29299fd078d59e9', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'UQM08j_aV_PUC29YoQT-sX6TRyPytU2JBaGzaNQYXro'}], 'enabled': False}
Run DeepSeek-V3 with 96GB VRAM + 256 GB RAM under Linux
1
My company rig is described in [https://www.reddit.com/r/LocalLLaMA/comments/1gjovjm/4x_rtx_3090_threadripper_3970x_256_gb_ram_llm/](https://www.reddit.com/r/LocalLLaMA/comments/1gjovjm/4x_rtx_3090_threadripper_3970x_256_gb_ram_llm/) 1. Set up CUDA 12.x 2. Set up llama.cpp: git clone https://github.com/ggerganov/llama.cpp/ cd llama.cpp cmake -B build -DGGML_CUDA=ON -DGGML_CUDA_F16=ON ; cmake --build build --config Release --parallel $(nproc) Your llama.cpp with recently merged DeepSeek V3 support is ready! 3. Now download the model: cd ../ mkdir DeepSeek-V3-Q3_K_M cd DeepSeek-V3-Q3_K_M for i in {1..8} ; do wget "https://huggingface.co/bullerwins/DeepSeek-V3-GGUF/resolve/main/DeepSeek-V3-Q3_K_M/DeepSeek-V3-Q3_K_M-0000$i-of-00008.gguf?download=true" -O DeepSeek-V3-Q3_K_M-0000$i-of-00008.gguf ; done 4. Now run it on localhost on port 1234: cd ../ ./llama.cpp/build/bin/llama-server \ --host localhost \ --port 1234 \ --model ./DeepSeek-V3-Q3_K_M/DeepSeek-V3-Q3_K_M-00001-of-00008.gguf \ --alias DeepSeek-V3-Q3-4k \ --temp 0.1 \ -ngl 15 \ --split-mode layer -ts 3,4,4,4 \ -c 4096 \ --numa distribute **Done!** When you ask it something, e.g. using `time curl ...`: time curl 'http://localhost:1234/v1/chat/completions' -X POST -H 'Content-Type: application/json' -d '{"model_name": "DeepSeek-V3-Q3-4k","messages":[{"role":"system","content":"You are an AI coding assistant. You explain as minimum as possible."},{"role":"user","content":"Write prime numbers from 1 to 100, no coding"}], "stream": false}' you get output like {"choices":[{"finish_reason":"stop","index":0,"message":{"content":"2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97.","role":"assistant"}}],"created":1736179690,"model":"DeepSeek-V3-Q3-4k","system_fingerprint":"b4418-b56f079e","object":"chat.completion","usage":{"completion_tokens":75,"prompt_tokens":29,"total_tokens":104},"id":"chatcmpl-gYypY7Ysa1ludwppicuojr1anMTUSFV2","timings":{"prompt_n":28,"prompt_ms":2382.742,"prompt_per_token_ms":85.09792857142858,"prompt_per_second":11.751167352571112,"predicted_n":75,"predicted_ms":19975.822,"predicted_per_token_ms":266.3442933333333,"predicted_per_second":3.754538862030308}} real 0m22.387s user 0m0.003s sys 0m0.008s or in `journalctl -f` something like Jan 06 18:01:42 hostname llama-server[1753310]: slot release: id 0 | task 5720 | stop processing: n_past = 331, truncated = 0 Jan 06 18:01:42 hostname llama-server[1753310]: slot print_timing: id 0 | task 5720 | Jan 06 18:01:42 hostname llama-server[1753310]: prompt eval time = 1292.85 ms / 12 tokens ( 107.74 ms per token, 9.28 tokens per second) Jan 06 18:01:42 hostname llama-server[1753310]: eval time = 89758.14 ms / 318 tokens ( 282.26 ms per token, 3.54 tokens per second) Jan 06 18:01:42 hostname llama-server[1753310]: total time = 91050.99 ms / 330 tokens Jan 06 18:01:42 hostname llama-server[1753310]: srv update_slots: all slots are idle Jan 06 18:01:42 hostname llama-server[1753310]: request: POST /v1/chat/completions 172.17.0.2 200 Good luck, fellow rig-builders!
2025-01-06T16:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1hv3ne8/run_deepseekv3_with_96gb_vram_256_gb_ram_under/
EmilPi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv3ne8
false
null
t3_1hv3ne8
/r/LocalLLaMA/comments/1hv3ne8/run_deepseekv3_with_96gb_vram_256_gb_ram_under/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280, 'height': 640}, 'resolutions': [{'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s'}], 'enabled': False}
DEEPSEEK NEEDS MERCH
1
[Please I needs it, I wants it](https://preview.redd.it/xepockhnoebe1.png?width=570&format=png&auto=webp&s=521f7d1fc0908153040f53ce34f7d977abc7ef44)
2025-01-06T17:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1hv41dh/deepseek_needs_merch/
hippobreeder3000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv41dh
false
null
t3_1hv41dh
/r/LocalLLaMA/comments/1hv41dh/deepseek_needs_merch/
false
false
https://a.thumbs.redditm…5M9Dve7RNw14.jpg
1
null
How can I eject a model on LM Studio through Open Web-UI?
1
[removed]
2025-01-06T17:34:13
https://www.reddit.com/r/LocalLLaMA/comments/1hv4lzj/how_can_i_eject_a_model_on_lm_studio_through_open/
Bowbowjowjow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv4lzj
false
null
t3_1hv4lzj
/r/LocalLLaMA/comments/1hv4lzj/how_can_i_eject_a_model_on_lm_studio_through_open/
false
false
self
1
null
Which LLM should I use?
1
[removed]
2025-01-06T17:46:32
https://www.reddit.com/r/LocalLLaMA/comments/1hv4wx4/wich_llm_i_should_use/
LOUAY-Sausage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv4wx4
false
null
t3_1hv4wx4
/r/LocalLLaMA/comments/1hv4wx4/wich_llm_i_should_use/
false
false
self
1
null
Is 8gb Vram enough for TTS models ?
1
[removed]
2025-01-06T17:47:52
https://www.reddit.com/r/LocalLLaMA/comments/1hv4y34/is_8gb_vram_enough_for_tts_models/
Bilalbillzanahi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv4y34
false
null
t3_1hv4y34
/r/LocalLLaMA/comments/1hv4y34/is_8gb_vram_enough_for_tts_models/
false
false
self
1
null
Continuous tasks or long-time use-cases for local LLMs
1
[removed]
2025-01-06T18:30:36
https://www.reddit.com/r/LocalLLaMA/comments/1hv5zrs/continuous_tasks_or_longtime_usecases_for_local/
NewTestAccount2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv5zrs
false
null
t3_1hv5zrs
/r/LocalLLaMA/comments/1hv5zrs/continuous_tasks_or_longtime_usecases_for_local/
false
false
self
1
null
Continuous tasks or long-time use-cases for local LLMs
1
Hi everyone, I used to run various tasks on my PC overnight - sometimes some brute-force calculations, sometimes downloads, sometimes games. It was always exciting to wake up and see the results. A little similar to a vacuum robot or a dishwasher: set it up, let it do its job, and return to a completed task. Sure, I could do these chores manually, and perhaps even faster and more accurately, but setting them up is much quicker, and the results are *good enough*. When I upgraded my PC, I decided to make it LLM-capable. I built a machine with an RTX 3090 and started running local models. There were use-cases I had only read about, like using LLMs for journaling, introspection, and sharing my thoughts in general (like a quasi-therapy). I wouldn't like to share my thoughts with a third-party provider, so that was the most exciting use-case to me. And man, those things are incredible. I still cannot believe how such advanced technology can run on a local machine. Anyway, currently, I use local models for basic tasks like chatting, brainstorming, introspection, and writing corrections. Since I run them locally, electricity is basically my only cost and it isn't too expensive. In this sense, local models are similar to the aforementioned vacuum robot or dishwasher. And I'm looking to realize more of the "run it and see the results" potential of local LLMs. So, I wanted to ask you all about existing use-cases or tools that make local LLMs work continuously, run in the background, or operate on a schedule. Here are some ideas I've been considering: For continuous (overnight) runs: 1. Metadata and documentation - continuously run a bot against some archive to propose metadata changes, fix mistakes, and create documentation. 2. Brainstorming and research - start with an initial idea or prompt and: - generate insights from different "experts" (system prompts) and synthesize them once in a while - conduct web searches - There is already a tool to perform research in a continuous manner (Automated-AI-Web-Researcher-Ollama) 3. Text optimization - continuously refine text for structure, flow, vocabulary, grammar, etc. 4. Image generation - vaguely describe an initial prompt and continuously generate detailed prompt variations and pictures based on them until explicitly stopped. For background runs: 1. Translator - a context-aware translation service. 2. Phone server - access my local setup from a phone. Periodic running (every morning, every hour, etc.): 1. Inbox synthesizer - summarize and prioritize emails. 2. Daily feed - synthesize content from various sources. 3. Task organization - periodically organize captured tasks, breaking down large tasks, consolidating small or redundant ones, and extracting the most urgent or important. There are some tasks that could be automated, but it wouldn't make sense to do so - like creating notes on a topic you're trying to learn. For me, the process of creating notes is as important to learning as keeping and reviewing them. So, automating this process would reduce the learning potential. However, besides such use-cases, there are likely hundreds of other tasks that could be automated, increasing productivity and saving a lot of time. TLDR: I'm looking for some ways to use local LLMs for continuous (infinitely iterative) tasks, in the background, or on a schedule. Any ideas, tools, or use-cases you've found particularly useful? Thanks in advance for any input!
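As a starting point for the "periodic running" bucket, a minimal scheduler loop against a local OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.); the endpoint, model name, and task prompt are all assumptions:

    # Run one LLM task on a fixed schedule and append the result to a file.
    import time
    import requests

    def run_task(prompt: str) -> str:
        resp = requests.post(
            "http://localhost:1234/v1/chat/completions",
            json={"model": "local-model",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=300)
        return resp.json()["choices"][0]["message"]["content"]

    while True:
        summary = run_task("Summarize and prioritize today's captured tasks: ...")
        with open("daily_feed.md", "a") as f:
            f.write(summary + "\n")
        time.sleep(24 * 60 * 60)   # once a day; use cron/systemd timers for anything serious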
2025-01-06T18:51:19
https://www.reddit.com/r/LocalLLaMA/comments/1hv6iel/continuous_tasks_or_longtime_usecases_for_local/
NewTestAccount2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv6iel
false
null
t3_1hv6iel
/r/LocalLLaMA/comments/1hv6iel/continuous_tasks_or_longtime_usecases_for_local/
false
false
self
1
null
You wouldn't download an AI?
1
2025-01-06T19:18:56
https://altayakkus.substack.com/p/you-wouldnt-download-an-ai
mrdevlar
altayakkus.substack.com
1970-01-01T00:00:00
0
{}
1hv76y0
false
null
t3_1hv76y0
/r/LocalLLaMA/comments/1hv76y0/you_wouldnt_download_an_ai/
false
false
https://b.thumbs.redditm…r1JUUHetwt0E.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?auto=webp&s=06daa08279ce782f377e36b1dbbb2ac695738a33', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=108&crop=smart&auto=webp&s=0b6c42fbe305e8ade3bb8b16ceb121280f98ee33', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=216&crop=smart&auto=webp&s=6a4b48d35e769c2fdc38b8c3ca94f1e7b5703547', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=320&crop=smart&auto=webp&s=98cffe1c3e9e3aee4b50f75392977412e3f43762', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=640&crop=smart&auto=webp&s=046b143352faba71234f4ca8bf160e32667355d5', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=960&crop=smart&auto=webp&s=7bab03ea494eb5dd4c748c96250e279423233c8d', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/bBuhfxa8pXiiBPk6KMHyOhFkTXkbEMgYuLRTGBs4wtQ.jpg?width=1080&crop=smart&auto=webp&s=78455672a9f9011bdc749ee397e1ade87c4040a2', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '2STcgxmZaJ5KA7u1E27TPsPObRO8ulQIna8RFn2wkVk'}], 'enabled': False}
I'm sorry WHAT? AMD Ryzen AI Max+ 395 2.2x faster than 4090
1
https://preview.redd.it/…c0e2372344c35a
2025-01-06T19:24:50
https://www.reddit.com/r/LocalLLaMA/comments/1hv7c54/im_sorry_what_amd_ryzen_ai_max_395_22x_faster/
KvAk_AKPlaysYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv7c54
false
null
t3_1hv7c54
/r/LocalLLaMA/comments/1hv7c54/im_sorry_what_amd_ryzen_ai_max_395_22x_faster/
false
false
https://b.thumbs.redditm…yxTOv0Ar0JUY.jpg
1
null
2.2x faster at tokens/sec vs RTX 4090 24GB using Llama 3.1 70B-Q4!
1
[At AMD CES 2025](https://preview.redd.it/winh95qkcfbe1.png?width=1203&format=png&auto=webp&s=cf5d7f0b3f23aed3ad440a52fab284172999fa37)
2025-01-06T19:25:16
https://www.reddit.com/r/LocalLLaMA/comments/1hv7cia/22x_faster_at_tokenssec_vs_rtx_4090_24gb_using/
No_Training9444
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv7cia
false
null
t3_1hv7cia
/r/LocalLLaMA/comments/1hv7cia/22x_faster_at_tokenssec_vs_rtx_4090_24gb_using/
false
false
https://b.thumbs.redditm…Zce4LQZrWCME.jpg
1
null
Announcement made by AMD at CES 2025 - New Ryzen CPU (AMD Ryzen AI Max+ 395) for laptops runs a 70B (Q4) 2 times faster than a 4090 discrete desktop GPU
1
2025-01-06T19:30:17
https://i.redd.it/2ny5lppkdfbe1.png
takuonline
i.redd.it
1970-01-01T00:00:00
0
{}
1hv7gto
false
null
t3_1hv7gto
/r/LocalLLaMA/comments/1hv7gto/announcement_made_at_amd_at_ces_2025_new_ryzen/
false
false
https://b.thumbs.redditm…XOMzxYsrxCis.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?auto=webp&s=0437ee34172186c8ed114020ddf70e12f775525a', 'width': 2328, 'height': 1061}, 'resolutions': [{'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=108&crop=smart&auto=webp&s=1eddaad243f80a3cdd51886359f29568f941d12d', 'width': 108, 'height': 49}, {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=216&crop=smart&auto=webp&s=532075d290fa0cdf376af84fb8b36175aaeb0201', 'width': 216, 'height': 98}, {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=320&crop=smart&auto=webp&s=324de6da1e4a6dca2075a341e516258048f95c8b', 'width': 320, 'height': 145}, {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=640&crop=smart&auto=webp&s=9b4d6bec0d5ea7493e348469fece8e8516fdf1e1', 'width': 640, 'height': 291}, {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=960&crop=smart&auto=webp&s=25df95c8eb706ace3bde6d5f6d97fddeb6b5be58', 'width': 960, 'height': 437}, {'url': 'https://preview.redd.it/2ny5lppkdfbe1.png?width=1080&crop=smart&auto=webp&s=7983b44027097749c5a54f090fbcb155c0b22cc2', 'width': 1080, 'height': 492}], 'variants': {}, 'id': 'x1-UXKw_TuAFuLMmvQWhPA0v1dm7cfIRHCsw7pWsqAI'}], 'enabled': True}
AMD Ryzen AI Max+ 395 runs Llama 3.1 70B-Q4 twice as fast as an RTX 4090
1
https://preview.redd.it/…or consumers.
2025-01-06T19:33:05
https://www.reddit.com/r/LocalLLaMA/comments/1hv7j8k/amd_ryzen_ai_max_395_llama_31_70bq4_twice_as_fast/
Any_Pressure4251
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv7j8k
false
null
t3_1hv7j8k
/r/LocalLLaMA/comments/1hv7j8k/amd_ryzen_ai_max_395_llama_31_70bq4_twice_as_fast/
false
false
https://a.thumbs.redditm…0NpxzjoHBO94.jpg
1
null
context?
1
2025-01-06T19:34:14
https://i.redd.it/u9srthaaefbe1.png
GOGONUT6543
i.redd.it
1970-01-01T00:00:00
0
{}
1hv7k8x
false
null
t3_1hv7k8x
/r/LocalLLaMA/comments/1hv7k8x/context/
false
false
https://b.thumbs.redditm…IkJ6FSletPVw.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/u9srthaaefbe1.png?auto=webp&s=e95c16bc2bc2d99a2cfe14d1b907bd8dcbc63a7b', 'width': 1424, 'height': 424}, 'resolutions': [{'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=108&crop=smart&auto=webp&s=ab1aa10b6e452f4b603cc01d7bbda3acb97dbab9', 'width': 108, 'height': 32}, {'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=216&crop=smart&auto=webp&s=bbfeb2691dcd5826891ab638e33d886aa223631b', 'width': 216, 'height': 64}, {'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=320&crop=smart&auto=webp&s=4ab383579cdc7481120f50937d09a1a34be8b63d', 'width': 320, 'height': 95}, {'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=640&crop=smart&auto=webp&s=75dbf6c45cb794be0a227dc55014dd6a2d593365', 'width': 640, 'height': 190}, {'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=960&crop=smart&auto=webp&s=63d88f37e7f5d56df8eb182e09ab4e5264e695c3', 'width': 960, 'height': 285}, {'url': 'https://preview.redd.it/u9srthaaefbe1.png?width=1080&crop=smart&auto=webp&s=2930c6f5115ae700f9e36dc92ed241c401cd49fb', 'width': 1080, 'height': 321}], 'variants': {}, 'id': '6W_nxrozIlSeJhD4y_I41lxt7XM8Wxmv52Y6cY3R_18'}], 'enabled': True}
RAG on Local PC
1
[removed]
2025-01-06T19:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1hv7lgj/rag_on_local_pc/
anjan42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv7lgj
false
null
t3_1hv7lgj
/r/LocalLLaMA/comments/1hv7lgj/rag_on_local_pc/
false
false
self
1
null
Best model for writing?
1
[removed]
2025-01-06T19:52:56
https://www.reddit.com/r/LocalLLaMA/comments/1hv80yu/best_model_for_writing/
One_Appointment_6035
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv80yu
false
null
t3_1hv80yu
/r/LocalLLaMA/comments/1hv80yu/best_model_for_writing/
false
false
self
1
null
Qwen2.5 14B on a Raspberry Pi
1
2025-01-06T19:54:59
https://www.reddit.com/gallery/1hv82tg
pepijndevos
reddit.com
1970-01-01T00:00:00
0
{}
1hv82tg
false
null
t3_1hv82tg
/r/LocalLLaMA/comments/1hv82tg/qwen25_14b_on_a_raspberry_pi/
false
false
https://a.thumbs.redditm…JeMKw03T_6x4.jpg
1
null
8945h with 780M running LM Studio
1
Hi there! I know that my processor's built-in GPU and NPU aren't that high-powered, I use dGPUs instead, but I am still interested in whether it would be possible to use the GPU and NPU this thing has in LM Studio. When I see runtimes in the menu, it says ROCm isn't compatible, but Vulkan IS. Note that I am running Win11 and not Linux. My intention is just to find out what's up with that Ryzen AI marketing they did with this processor.
2025-01-06T20:24:02
https://www.reddit.com/r/LocalLLaMA/comments/1hv8tg5/8945h_with_780m_running_lm_studio/
Endercraft2007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv8tg5
false
null
t3_1hv8tg5
/r/LocalLLaMA/comments/1hv8tg5/8945h_with_780m_running_lm_studio/
false
false
self
1
null
Cheap a100 or scam?
1
[removed]
2025-01-06T20:31:19
https://www.reddit.com/r/LocalLLaMA/comments/1hv900x/cheap_a100_or_scam/
Applsauce54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv900x
false
null
t3_1hv900x
/r/LocalLLaMA/comments/1hv900x/cheap_a100_or_scam/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/TtT6YqLX-IG5kN-DSGA6Vy0FrCmMM7mnljqEQpTMwo8.jpg?auto=webp&s=7a7a1813217b0653953f26fc5e83d3ee4e600c13', 'width': 400, 'height': 375}, 'resolutions': [{'url': 'https://external-preview.redd.it/TtT6YqLX-IG5kN-DSGA6Vy0FrCmMM7mnljqEQpTMwo8.jpg?width=108&crop=smart&auto=webp&s=b1b9c8929aea540d1415a6e183e8b629dc81ade5', 'width': 108, 'height': 101}, {'url': 'https://external-preview.redd.it/TtT6YqLX-IG5kN-DSGA6Vy0FrCmMM7mnljqEQpTMwo8.jpg?width=216&crop=smart&auto=webp&s=c3e0c9bc10b59ac2aded546053e9bc8198205074', 'width': 216, 'height': 202}, {'url': 'https://external-preview.redd.it/TtT6YqLX-IG5kN-DSGA6Vy0FrCmMM7mnljqEQpTMwo8.jpg?width=320&crop=smart&auto=webp&s=9a81520da639990ea64da8739013f6f001a40d65', 'width': 320, 'height': 300}], 'variants': {}, 'id': 'IHU_NyYFlj8stfyXMLnqNnNm5hOH2dlIpfTzlKm6ERw'}], 'enabled': False}
Little to no code AI automation tools, any recommendations?
1
I am looking for a nice tool to build POCs that leverage AI agents with little to no code; from what I see, there are so many. Are there any specific tools you'd recommend? And why?
2025-01-06T20:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1hv905j/little_to_no_code_ai_automation_tools_any/
Better_Resource_4765
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hv905j
false
null
t3_1hv905j
/r/LocalLLaMA/comments/1hv905j/little_to_no_code_ai_automation_tools_any/
false
false
self
1
null