Dataset schema (one Reddit post per row): title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns]) | url (string, 0–780 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns]) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable)
**Beyond RAG: Building a Knowledge Management System That Enhances Rather Than Replaces Thought** (score 11, u/Naga, 2025-01-02) · https://nsavage.substack.com/p/beyond-rag-building-a-knowledge-management
**deepseek suks** (score 0, u/RouteGuru, 2025-01-02)

I've been using the API, and everything after its first response is complete garbage. It can't keep context and can't follow instructions. What's all the hype for? I'm back to using GPT-4o for my API calls.
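One thing worth checking before writing the model off: the chat-completions endpoint is stateless, so context only survives between turns if you resend the full message history on every call. A minimal sketch, assuming the OpenAI-compatible DeepSeek endpoint and a `DEEPSEEK_API_KEY` environment variable:

```python
import os
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API.
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                base_url="https://api.deepseek.com")

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=history,  # resend the whole conversation on every call
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```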
**(noob) how do I build a rig?** (score 5, u/IZA_does_the_art, 2025-01-03)

*I'm not as tech-savvy as I'd like to believe myself to be, so please bear with me if I sound like an idiot at any point.*

This project is something I've wanted to do for a while but never have, just from how daunting it all looks when I actually dig into it.

I want to put together a dedicated LLM PC tower whose only function would be to sit in a corner as a personal server I can connect various applications to wirelessly (Obsidian MD, SillyTavern), staying on and running 24/7 and loaded with enough VRAM to run a right proper high-quality model. My primary use case would be note-taking and analysis of a lot of writings and documents. A personal assistant, to put it plainly.

I've been on the user end of local models for a few years and want to get a bit more intimate with what I'm using. I've been stuck with 12B-and-below models on my 4090 laptop (16 GB), and I can't exactly have them running alongside my other graphics-intensive productivity software, so being able to simply connect to a dedicated device built to run those wild 100B+ models would be a dream (though really the bare-minimum goal is high-context 70B).

My question is: how would I do this? I'm aware people have made exactly what I'm describing, but most haven't divulged the details of how they put it together. I wouldn't know the first step. How do people hook up, like, 7 graphics cards together like that? Is it just like building a normal computer but with a bunch of custom boards, software, and cases? I really want to build a rig but I'm paralyzed by uncertainty and lack of knowledge.

Are there any guides or resources for someone as clueless as me to read up on? Or, ideally, could I get some tips from those who have built their own rigs and how they did it?
**Llama 3.3 is trolling me** (score 1, u/Funny_Acanthaceae285, 2025-01-03) · image post (i.redd.it)
**Very basic LLM** (score 0, u/heyflyguy, 2025-01-03)

It seems like you always get an LLM that knows about Shakespeare and geometry. Is there a way to find an LLM that is basically college-educated with little bias? TYIA
**Twoie: TUI for LLMs and Development** (score 4, u/mr_happy_nice, 2025-01-03)

Little TUI thing I made for a project I'm working on:
Access models from OpenAI, Google, Groq, SambaNova, and DeepSeek (so far). Paste and edit responses, all from the command line. Use it on any Linux-based terminal, including Termux on Android, with minimal requirements. Installation is super simple: just use your favorite package manager to make sure you have tmux, jq, curl, and dialog installed, chmod the shell files, and run twoie.sh.

Not revolutionary, but I had fun making it, and it does what I want it to: connect through the terminal, even a native Android Termux shell, no proot needed. Looking for more free providers to add. It connects to Ollama as well; change the IP address in all\_chat.sh. Selecting Ollama or Google pulls the list of available models to choose from; for other providers you can hard-code the specific models you want.
[https://github.com/mrhappynice/twoie](https://github.com/mrhappynice/twoie)
Short vid: [https://www.youtube.com/watch?v=5ASjYKRf-pw](https://www.youtube.com/watch?v=5ASjYKRf-pw)
Peace :)
**So what happened to the 1.58-bit models "revolution"?** (score 178, u/Kriima, 2025-01-03)

Haven't heard about that in quite a while now.
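For context: the name comes from BitNet b1.58, where every weight is constrained to {-1, 0, +1}, i.e. log2(3) ≈ 1.58 bits of information per weight. A minimal sketch of the paper's absmean quantizer (illustrative, not a full BitNet implementation):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    # BitNet b1.58-style quantization: scale by the mean absolute value,
    # then round each weight to the nearest value in {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # dequantize as w_q * scale
```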
**Pinokio** (score 1, u/ComprehensiveOne7974, 2025-01-03): [removed]
**Does Mergoo still work?** (score 1, u/OrangeESP32x99, 2025-01-03)

I'm trying to create a MoE (or pseudo-MoE) out of three Qwen 2.5 3B models. I planned on using Mergoo to do it.
But I'm running into an issue with transformers: it appears `shard_checkpoint` is deprecated, which seems to be crucial for Mergoo.

I was going to use an older version of transformers, but I'm not sure when it was deprecated, and I'd rather not waste time if Mergoo is too outdated (no updates for about 9 months).
Does anyone know of another easy way to make a MoE? All the libraries I seem to find are older and no longer maintained.
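If the only blocker is the missing `shard_checkpoint`, one quick thing to try is checking which transformers build you're on and pinning an older one. A small sketch (the 4.40 cutoff below is an assumption, not a verified version boundary):

```python
import transformers

try:
    from transformers.modeling_utils import shard_checkpoint  # noqa: F401
    print(f"transformers {transformers.__version__}: shard_checkpoint is available")
except ImportError:
    print(f"transformers {transformers.__version__}: shard_checkpoint was removed; "
          "try e.g. `pip install 'transformers<4.40'` and rerun Mergoo")
```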
**pinokio** (score 1, u/ComprehensiveOne7974, 2025-01-03): [removed]
**Table Search?** (score 1, u/Ill_Recipe7620, 2025-01-03): [removed]
**4090s for sale on eBay without VRAM or core. What is causing this?** (score 139, u/Mephidia, 2025-01-03)

As the title states, there are literally hundreds of 4090s for sale on eBay with no core and no VRAM.
Why are there so many of these? My initial guess is that people are finding an alternative way to use the core and VRAM in a more space-efficient manner, like putting them in a 1- or 2-slot configuration, but that would also require the PCB, which a lot of sellers leave in.

WTF is going on?
Here are some links
https://www.ebay.com/itm/266748557761?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=SGKN60xfT8q&sssrc=4429486&ssuid=z-8ls6hzt3w&var=&widget_ver=artemis&media=COPY
https://www.ebay.com/itm/266708800731?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=SGKN60xfT8q&sssrc=4429486&ssuid=z-8ls6hzt3w&var=&widget_ver=artemis&media=COPY
https://www.ebay.com/itm/266807084573?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=SGKN60xfT8q&sssrc=4429486&ssuid=z-8ls6hzt3w&var=&widget_ver=artemis&media=COPY
**How about n8n and MLX models?** (score 1, u/TerribleIndication18, 2025-01-03): [removed]
**Help / tool suggestions for talking to LLMs from Python/R on Linux systems without admin privileges, like on a SLURM-based HPC?** (score 1, u/DepressionSux420, 2025-01-03): [removed]
**Best local tool for article/paper writing?** (score 4, u/Recoil42, 2025-01-03)

ChatGPT's writing mode is cool, but... well, it's ChatGPT, and the number of requests is limited. Chat-style feeds, meanwhile, aren't really great for article writing, as they have no CoT-style overview of how revisions are being done, and tools like Cline aren't really suited either.
Does anyone have any recommendations for a specific local tool which has an article-writing mode? Ideally something which allows you to keep feeding the article back into an API of your choice with new prompts, as with ChatGPT's writer mode. Bonus if it allows you to cite and refer to sources, NotebookLM-style.
**2025 is game changer for open source ai!** (score 204, u/Evening_Action6217, 2025-01-03) · image post (i.redd.it)
**Segmentation fault on Mistral Large 2411 Q5/Q6 GGUF quants (bartowski/mradermacher)** (score 3, u/Bitter_Square6273, 2025-01-03)

Hi folks, has anybody else experienced problems with large quants of Mistral Large 2411 on the Mac version of koboldcpp?
Q3/Q4 quants work fine, but Q5/Q6 immediately produce a "segmentation fault" error:

"Line 2: 2499 Segmentation fault: 11"

I'm trying to use it on a Mac Studio with 128 GB of memory, so it's supposed to have enough VRAM for those quants.

Any hints are welcome! Thanks for reading 😊
**2 OLMo 2 Furious** (score 135, u/ninjasaid13, 2025-01-03) · https://arxiv.org/abs/2501.00656
**Is this worth it for inference and fine-tuning 70B models?** (score 1, u/tungstenmamba, 2025-01-03)

This V100 8x DGX server is available for $11k on eBay. That's almost 128 GB of VRAM connected via SXM.

Is this a bad deal? Link: https://www.ebay.com/itm/267003982127
**Deepseek V3 hosted on Fireworks (no data collection, $0.9/m, 25 t/s)** (score 154, u/davernow, 2025-01-03)

Model: [https://fireworks.ai/models/fireworks/deepseek-v3](https://fireworks.ai/models/fireworks/deepseek-v3)
Announcement: [https://x.com/FireworksAI\_HQ/status/1874231432203337849](https://x.com/FireworksAI_HQ/status/1874231432203337849)
Fireworks is hosting DeepSeek! It's a nice option because they don't collect/sell data (unlike DeepSeek's own API). It's more expensive for now ($0.9/m, though DeepSeek is raising their prices soon anyway). Perf isn't great (25 t/s), but it's decent.
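Since Fireworks speaks the OpenAI-compatible protocol, switching over is mostly a base-URL change. A quick sketch (the exact model id below is an assumption based on Fireworks' usual `accounts/fireworks/models/...` naming):

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_FIREWORKS_API_KEY",
    base_url="https://api.fireworks.ai/inference/v1",
)
resp = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3",  # assumed model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```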
They also say in the Twitter thread that they're working on fine-tuning support.
Apologies if this has already been posted, but Reddit search didn't find it.
**Train a 7B model that outperforms GPT-4o?** (score 1, u/Lynncc6, 2025-01-03): [removed]
**Minimal screenshot management tool using moondream + qwen2:1.5b + mxbai for local image understanding** (score 15, u/Hairy-Map2785, 2025-01-03)

My intention was to test how far small language models can go on specific tasks and how they can be combined into a working, useful app! So I built this minimal tool that combines several local models for screenshot management and semantic search.
Repo: [https://github.com/tisu19021997/snipai](https://github.com/tisu19021997/snipai)
Demo: [https://www.youtube.com/watch?v=ftmSr9TE6wA](https://www.youtube.com/watch?v=ftmSr9TE6wA)
**Models (served on `ollama`)**
* `moondream` for image description generation
* `qwen2:1.5b` for extracting image tags
* `mxbai-embed-large` for generating vector embeddings for semantic search - it also does [binary quantization](https://www.mixedbread.ai/blog/binary-mrl) to save storage and speed up retrieval (quick sketch of the idea right below)
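For readers curious what the binary quantization buys you, here's a minimal NumPy sketch of the idea (illustrative only, not the repo's actual code): each embedding dimension is reduced to its sign bit, and retrieval becomes a Hamming-distance search.

```python
import numpy as np

def binarize(emb: np.ndarray) -> np.ndarray:
    # Keep only the sign of each dimension, packed 8 dims per byte:
    # a 1024-dim float32 vector (4 KB) shrinks to 128 bytes.
    return np.packbits(emb > 0, axis=-1)

def hamming_search(query_bits: np.ndarray, index_bits: np.ndarray, k: int = 5):
    # query_bits: packed vector from binarize(); index_bits: (N, n_bytes) array.
    # XOR + popcount gives Hamming distance; smallest distance = most similar.
    dists = np.unpackbits(np.bitwise_xor(index_bits, query_bits), axis=-1).sum(axis=-1)
    return np.argsort(dists)[:k]
```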
**Tech stacks**
* `PyQt6` for the UI
* `sqlite` for both regular database operations and vector storage (via `sqlite-vec`) - and it already ships with Python
* `ollama` for model serving
* `networkx` for graph-based visualization of semantic relationships between screenshots (work in progress)
* Tested on my M1 Pro 16 GB Mac.
I'd love to hear your thoughts, feedback or any feature suggestions! I'm currently working more on the **semantic similarity graph** (by trying small image embedding models instead of text) and trying to **finetune** more small models for tasks like image tagging, naming, and description generation.
[Demo](https://reddit.com/link/1hsf9qh/video/g7orkavmtpae1/player)
|
**What happened to Moshi?** (score 54, u/Interesting-Fish-542, 2025-01-03)

Moshi is a good voice-to-voice model with dual-channel support, so why isn't it a hot topic?
Also, another V2V model, [hertz-dev](https://si.inc/hertz-dev/), didn't catch the hype either.
**Incorporating dashboard/UI images into RAG** (score 1, u/mlstudies, 2025-01-03): [removed]
"Which is the Best LLM Model for Coding with 4070 Ti SUPER 16GB VRAM?" | 1 | [removed] | 2025-01-03T06:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hsg2yd/which_is_the_best_llm_model_for_coding_with_4070/ | Brief-Ad-1131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsg2yd | false | null | t3_1hsg2yd | /r/LocalLLaMA/comments/1hsg2yd/which_is_the_best_llm_model_for_coding_with_4070/ | false | false | self | 1 | null |
**cortex supports small-thinker-3B, a small reasoning model fine-tuned from Qwen2.5-3B-Instruct** (score 177, u/emreckartal, 2025-01-03) · video post (v.redd.it)
**A simple framework to decide if fine-tuning is worth it** (score 22, u/SirComprehensive7453, 2025-01-03)

Hi everyone!
I’m building a startup in the LLM customization space, and in every first call with enterprise customers this year, we’ve ended up brainstorming the same thing – **does their application really need a fine-tuned model, or is a stock LLM enough?**
We’ve distilled all those conversations into a simple framework to help answer that question. It’s been useful for scoring leads, prioritizing, and giving our customers more clarity – saving everyone time.
Here's the link: [https://genloop.ai/should-you-fine-tune](https://genloop.ai/should-you-fine-tune)
It’s built from a regression model based on past calls and the value we’ve seen in case studies. I’d love to hear your thoughts – does this line up with your experience? Any feedback is welcome!
Wishing everyone a great 2025 for open-source and open-weight models!
**🚀 Introducing Titan Sight: Seamless Web Search Integration for LLM Agents with Advanced Caching and Free Options! 🧠🔍** (score 1, u/Powerful_Soup7645, 2025-01-03): [removed]
**Seeking AI-Driven Compliance Solutions for Banking Regulations** (score 0, u/king554, 2025-01-03)

Hello everyone,
I’m a compliance officer at a bank, and I’m looking to integrate AI solutions to ensure our internal policies and procedures align with our central bank’s regulations and guidelines. Specifically, I want to upload relevant circulars and guidance notes into an AI system that can spot potential violations—both direct and indirect—within our existing documentation.
If anyone has experience using AI to systematically analyze complex regulatory frameworks or identify compliance gaps, I’d greatly appreciate any insights or recommendations you can share. I’m also interested in any best practices for handling large volumes of text and producing concise, actionable reports for internal stakeholders.
Thank you in advance for your help!
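For anyone exploring this pattern: a common starting point is to chunk the internal policies, pair each chunk with the relevant circular text, and ask a model to flag conflicts. A minimal illustrative sketch, assuming an OpenAI-compatible endpoint (the model name is a placeholder; a locally hosted model would work the same way):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY; or point base_url at a local server

def flag_conflicts(policy_chunk: str, regulation_excerpt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Regulation excerpt:\n" + regulation_excerpt +
                "\n\nInternal policy excerpt:\n" + policy_chunk +
                "\n\nList any direct or indirect conflicts with the regulation, "
                "citing the relevant sentences, or reply 'No conflicts found'."
            ),
        }],
    )
    return resp.choices[0].message.content
```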
**Recommend a Mac model for local LLMs?** (score 1, u/juzatypicaltroll, 2025-01-03)

Planning to wait for the M4 Max.
Would an M4 Mini or Mini Pro be good enough?

What kinds of models would they be able to support without the processing time being unusable, i.e. more than 10 seconds for a simple "hello"?
**Most cost effective chatbot service** (score 5, u/Zealousideal_Cut5161, 2025-01-03)

My college is organising an event and we want to have a chatbot on its site. I have worked with this stuff but never with a product scale or mindset.

So, does anybody know the most cost-effective (even free) way to build a small chatbot that fits my needs?
**Running Llama 3.1 70B FP16 precision, locally with RTX 4090 24GB VRAM for 1 output token/sec** (score 1, u/gweizzz, 2025-01-03): [removed]
**vision model for whole comic/manga understanding** (score 0, u/swagerka21, 2025-01-03)

Is there a local vision model that can understand a comic/manga and describe all the dialogue and actions? I had no luck with Qwen2-VL 7B.
**deploy 70b fp16 llama3.1 on my rtx4090** (score 1, u/gweizzz, 2025-01-03): [removed]

**rtx4090 for llama3.1 70b fp16 targeting 1 token/sec** (score 1, u/gweizzz, 2025-01-03): [removed]
**Is there a GGUF version of Llama 3.2 3B SpinQuant? Please link** (score 0, u/Own-Potential-2308, 2025-01-03)

https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct-SpinQuant_INT4_EO8
**🚀 Enhancing LLAMA-2\_7B with Chain-of-Thought and Web Search Capabilities** (score 0, u/Alone-Hunt-7507, 2025-01-03)

Hey everyone! 👋
[Repo Link](https://github.com/threatthriver/LLAMA-2_7B)
I've just upgraded Meta's LLAMA-2\_7B model by adding **chain-of-thought** reasoning and **web search** functionality! 🌐💡 This means the model can now work through complex problems step by step, and also fetch real-time info from the web to give you more accurate answers. 🧠✨
# What's New?
* **Chain-of-Thought Reasoning:** 🧩 The model can now break down problems, like answering "How many 'r's are in 'strawberry'?", with logical steps to arrive at the correct answer. 🔍 It’s a small but powerful change that improves how it handles reasoning tasks!
* **Web Search Integration:** 🌍 Now, LLAMA-2\_7B can search the web for the most up-to-date information, making it much more reliable when answering questions about current events or specific facts.
# How Does It Work?
* **Chain-of-Thought:** 💭 By guiding the model to think step by step, it improves how it answers tricky questions.
* **Web Search:** 🔍 It can pull live data from the internet, so responses aren't just based on pre-trained knowledge—they're current and relevant (see the sketch just below).
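To make the two bullets above concrete, here's a minimal sketch of how a chain-of-thought prompt and a web-search hook can be combined before calling the model (illustrative only, not the repo's actual code; `search_web` is a hypothetical helper you'd wire to a search API):

```python
def search_web(query: str) -> str:
    # Hypothetical helper: call your search API of choice and return
    # a short text summary of the top results.
    raise NotImplementedError("wire this up to a search API")

def build_prompt(question: str, use_search: bool = True) -> str:
    context = f"Web results:\n{search_web(question)}\n\n" if use_search else ""
    return (
        f"{context}Question: {question}\n"
        "Think through the problem step by step, "
        "then give the final answer on its own line."
    )
```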
# The Results?
* For example, when asked about the strawberry question, it correctly identified how many 'r's are in "strawberry" (without breaking a sweat 😅).
* It also gave accurate answers to other questions requiring fresh data, thanks to the web search feature.
# What's Next?
I’m planning to keep improving its reasoning abilities and expand the web search feature. 🚀
But hey, if you want to make it even better, **feel free to improve it**! The code is open-source, so you can contribute and make it smarter! Let’s collaborate and take it to the next level together! 🔧💡
Feel free to check out the updates and let me know your thoughts! Happy coding! 💻👨💻
**I built a context-aware self-improved translator using Ollama** (score 28, u/m19990328, 2025-01-03)

Project: [https://github.com/CyrusCKF/translator/](https://github.com/CyrusCKF/translator/)
Download for Windows: [https://github.com/CyrusCKF/translator/releases](https://github.com/CyrusCKF/translator/releases)
Machine translation has always been a challenging task. While tools like Google Translate and DeepL are good at general tasks, they become underwhelming for creative media projects. They often lose context and tone in translation, resulting in inconsistent styles.

To solve this, I chose to use an LLM, specifically via Ollama for local models. While not as accurate, an LLM produces more natural-sounding sentences and offers options for supplying context. Advanced techniques can also be used to overcome the limited capacity of local models. This is how I developed this project.
Features of the project
* Free and Offline Translation
* Context-Aware Input
* Custom Example Pairs
* Self-Refining Results
* Translation Confidence Calculation
The project is built using React and Electron. You may download the zip or run the project using npm.
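For anyone who wants the gist of the self-refining loop without reading the repo, here is a minimal sketch of the idea using the `ollama` Python package (illustrative only, not the project's actual code; the model name is a placeholder):

```python
import ollama

def translate(text: str, target: str, context: str = "", rounds: int = 2) -> str:
    msgs = [{"role": "user",
             "content": f"Context: {context}\nTranslate into {target}:\n{text}"}]
    draft = ollama.chat(model="llama3.1", messages=msgs)["message"]["content"]
    for _ in range(rounds):
        # Self-refinement: feed the draft back and ask for a critique + redo.
        msgs += [
            {"role": "assistant", "content": draft},
            {"role": "user", "content": "Critique the translation for tone and "
                                        "consistency, then output only an improved version."},
        ]
        draft = ollama.chat(model="llama3.1", messages=msgs)["message"]["content"]
    return draft
```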
**Train a 7B model that outperforms GPT-4o?** (score 1, u/Lynncc6, 2025-01-03): [removed]
**Am I the only one uninterested in frontier models? SLMs feel more interesting.** (score 1, u/HugoCortell, 2025-01-03): [removed]

**Looking for small LLM Suggestions for Summarizing Dutch PDFs on NVIDIA Jetson Orin Nano** (score 1, u/GroundbreakingTea195, 2025-01-03): [removed]

**Ideal temperature value for Agents?** (score 1, u/Raise_Fickle, 2025-01-03): [removed]

**RTX4090 on Llama 3.1 FP16 at 1 Token/sec** (score 1, u/gweizzz, 2025-01-03): [removed]
**Is this LoRA implementation correct?** (score 4, u/reso_ams, 2025-01-03)

I was trying to fine-tune Moondream2 using LoRA, but I got weird loss curves.

Here is the link to the code: [LoRA-finetune](https://colab.research.google.com/drive/1cNSme3Tc3vynxUmov-JN3uEzbU3mbG9W#scrollTo=K6xt85GATTLv)
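For comparison, here is what a minimal, known-good LoRA wrapper around a linear layer looks like in PyTorch (a reference sketch, not tied to Moondream2's internals): the base weights stay frozen, `B` starts at zero so training begins as a no-op, and only the low-rank factors update.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    # y = W x + (alpha / r) * B(A x), with the base weight W frozen.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapters train
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.01)
        nn.init.zeros_(self.B.weight)  # zero init: starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))
```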
**Can anyone download Sonnet 3 or Sonnet 3.5 independently, without the Claude chatbot, and create a chatbot that would give uncensored replies? If yes, how?** (score 0, u/CompetitionKnown5708, 2025-01-03)

I am writing a novel scene and did not know how to write a scene with a horror undertone, so I sought guidance from the Claude chatbot (which uses several Sonnet models) for examples. However, it constantly said that it is not appropriate, or "sorry, I can't do that," and this happened several times. Now, how would one create a chatbot using Sonnet models that gives uncensored advice (and how would one download them independently)?
**Testing LLMs on Cryptic Puzzles – How Smart Are They, Really?** (score 1, u/geloop1, 2025-01-03): [removed]
**AI Industries need to operate as foundational research labs** (score 1, u/OtherRaisin3426, 2025-01-03)

As the cycle time between AI research and AI product development shortens, industries that have strong in-house research capabilities will win in the long term.
Here's a very interesting blog post: [https://open.substack.com/pub/vizuara/p/ai-industries-need-to-operate-as?r=4ssvv2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false](https://open.substack.com/pub/vizuara/p/ai-industries-need-to-operate-as?r=4ssvv2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
**Train a 7B model that outperforms GPT-4o?** (score 205, u/Lynncc6, 2025-01-03) · image post
**The best open source LLM for Python developers?** (score 1, u/Straight-Internal903, 2025-01-03): [removed]

**Data Chips to Chirpy token ratio (Kage game)** (score 1, u/Far_Illustrator_3507, 2025-01-03): [removed]

**I'm looking for a tool to edit documents, much like ChatGPT allows (but locally)** (score 1, u/LuminousDragon, 2025-01-03): [removed]

**What is the best smallest model to run locally?** (score 1, u/0y0s, 2025-01-03): [removed]

**LLM Text Tool** (score 1, u/hoiru, 2025-01-03): [removed]
First #1 trending robotics datasets on HF? | 1 | [removed] | 2025-01-03T12:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hslebj/first_1_trending_robotics_datasets_on_hf/ | Lynncc6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hslebj | false | null | t3_1hslebj | /r/LocalLLaMA/comments/1hslebj/first_1_trending_robotics_datasets_on_hf/ | false | false | 1 | {'enabled': False, 'images': [{'id': '7yQTx0YyQsDbj2bd8XPdYbnRiTXeY0dUb5Za7WDFQJ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=108&crop=smart&auto=webp&s=7301b3ab289448fb4274070aa317d023a4adc082', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=216&crop=smart&auto=webp&s=251eb2a1f4ab66db59613c7850ac80388c247c74', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=320&crop=smart&auto=webp&s=701b16b30722a9107c2944e9b6c317922cc01818', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=640&crop=smart&auto=webp&s=c35d98590a98083c83a72d6524750d164f045413', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=960&crop=smart&auto=webp&s=30cfb2ba831fad3478a7a4f95d89bde946b3e853', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?width=1080&crop=smart&auto=webp&s=43b850a36e27747c3b42ac22971ca6b1798833d2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/J9ciWnL7gnSvftKMkto55DVeAgMOI2HFvmKsC-aVffM.jpg?auto=webp&s=3f5933933854a29f6ae89f22b8b689c1ae55c776', 'width': 1200}, 'variants': {}}]} |
|
Model for photo retouching | 1 | [removed] | 2025-01-03T12:30:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hslfcj/model_for_photo_retouching/ | aleeesashaaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hslfcj | false | null | t3_1hslfcj | /r/LocalLLaMA/comments/1hslfcj/model_for_photo_retouching/ | false | false | self | 1 | null |
Issues with disk space while installing Tabby API | 1 | [removed] | 2025-01-03T12:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hsllhb/issues_with_disk_space_while_installing_tabby_api/ | GraphBirdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsllhb | false | null | t3_1hsllhb | /r/LocalLLaMA/comments/1hsllhb/issues_with_disk_space_while_installing_tabby_api/ | false | false | self | 1 | null |
Are professional AI prompter services or freelancers established yet? | 1 | [removed] | 2025-01-03T12:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hsln3y/are_professional_ai_prompter_services_or/ | lipstickandchicken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsln3y | false | null | t3_1hsln3y | /r/LocalLLaMA/comments/1hsln3y/are_professional_ai_prompter_services_or/ | false | false | self | 1 | null |
What is the best smallest model to run locally? | 1 | [removed] | 2025-01-03T12:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hslrc8/what_is_the_best_smallest_model_to_run_locally/ | 0y0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hslrc8 | false | null | t3_1hslrc8 | /r/LocalLLaMA/comments/1hslrc8/what_is_the_best_smallest_model_to_run_locally/ | false | false | self | 1 | null |
Incorporating dashboard/UI images into RAG | 1 | [removed] | 2025-01-03T13:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hslzf3/incorporating_dashboardui_images_into_rag/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hslzf3 | false | null | t3_1hslzf3 | /r/LocalLLaMA/comments/1hslzf3/incorporating_dashboardui_images_into_rag/ | false | false | self | 1 | null |
LLM as survival knowledge base | 203 | The idea is not new, but worth discussing anyways.
LLMs are a source of archived knowledge. Unlike books, they can provide instant advice based on a description of the specific situation you are in, the tools you have, etc.
I've been playing with popular local models to see if they can be helpful in random imaginary situations, and most of them do a good job explaining the basics. Much better than a random movie or TV series, where people do stupid, wrong things most of the time.
I would like to hear whether anyone else has done similar research and has specific favorite models that could be handy in case of "apocalypse" situations. | 2025-01-03T13:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hsm57o/llm_as_survival_knowledge_base/ | NickNau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsm57o | false | null | t3_1hsm57o | /r/LocalLLaMA/comments/1hsm57o/llm_as_survival_knowledge_base/ | false | false | self | 203 | null |
Concurrent processing of documents in RAG Pipeline? | 1 | We are planning to create a RAG pipeline. It would be used to upload PDFs and extract 50-100 data points from each. Each batch will consist of 100-200 PDFs.
Tech Stack: Ollama, LlamaIndex, Qdrant DB, llama3.2, Opik
Has anyone tried to do this in a local setup? Please share your experiences, if any.
To handle concurrent processing of documents, we are planning to use Celery. Has anyone tried Celery or any other alternatives?
Please suggest any alternative approaches or open-source tools that could streamline this process. Links to such tools would be appreciated.
We cannot call third party APIs. Data is confidential | 2025-01-03T13:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hsmr2k/concurrent_processing_of_documents_in_rag_pipeline/ | pathakskp23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsmr2k | false | null | t3_1hsmr2k | /r/LocalLLaMA/comments/1hsmr2k/concurrent_processing_of_documents_in_rag_pipeline/ | false | false | self | 1 | null |
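A minimal sketch of the Celery fan-out for a batch of PDFs, assuming a local Redis broker and that LlamaIndex's `Settings` are already pointed at Ollama for the LLM and embeddings (that wiring, the task name, and the sample question are all illustrative, not a definitive pipeline):

```python
# tasks.py -- illustrative sketch only; assumes a Redis broker and that
# llama_index Settings are already configured to use Ollama + Qdrant.
from celery import Celery

app = Celery("rag_pipeline",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(bind=True, max_retries=3)
def ingest_pdf(self, pdf_path: str) -> dict:
    # Import inside the task so processes that only enqueue work stay light.
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
    try:
        docs = SimpleDirectoryReader(input_files=[pdf_path]).load_data()
        index = VectorStoreIndex.from_documents(docs)  # chunk + embed + store
        engine = index.as_query_engine()
        # In reality you'd loop over the 50-100 data-point questions here.
        answer = engine.query("What is the document's effective date?")
        return {"pdf": pdf_path, "effective_date": str(answer)}
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)

# Fan a batch of 100-200 PDFs out to however many workers you start:
# results = [ingest_pdf.delay(path) for path in pdf_paths]
```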
Looking for jailbreak guides | 0 | Hi! I'm working on automating jailbreaks for LLMs. Are there any resources around prompts/chats for jailbreaking LLMs? I'm also looking for a benchmark of "how deeply the model is jailbroken". | 2025-01-03T13:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hsmvnm/looking_for_jailbreak_guides/ | Mysterious_Hearing14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsmvnm | false | null | t3_1hsmvnm | /r/LocalLLaMA/comments/1hsmvnm/looking_for_jailbreak_guides/ | false | false | self | 0 | null |
Ok so here’s my wacky idea | 0 | 1. Get one of those multimodal models that input and output both images and text. I know Google Gemini does it; I'm not sure there's an open-weights one yet, *anyway*
2. Hook up your own webcam and auto-feed it in as often as possible, with the system prompt ‘you are being shown the feed from the user’s webcam; generate images showing what *you* (the AI) might be doing on the other side’. Know what I mean? Image-in, image-out turn-based chat interaction, but with pictures instead of text. If you got it up to, I dunno, 3 or 4 turns per minute, I bet it could be fun.
I’m sure that’s not 100% clear. Shower thoughts. I’d like to see it done though. | 2025-01-03T14:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hsnonm/ok_so_heres_my_wacky_idea/ | FunnyAsparagus1253 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsnonm | false | null | t3_1hsnonm | /r/LocalLLaMA/comments/1hsnonm/ok_so_heres_my_wacky_idea/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '52gzGdhAgzj9pzN3fK7fkVvHcDiNGN4RlpJTqx7JhSI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=108&crop=smart&auto=webp&s=dd11a588b12830296201389ce1594b52e9ce2979', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=216&crop=smart&auto=webp&s=d89bc12d6c93e7179520cd3dbfbcbdc5764c489f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=320&crop=smart&auto=webp&s=80b2e06a65afc047ad3b4d8c106c4d326f542a36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=640&crop=smart&auto=webp&s=9fce01f79873aa815a59cb44267f4a1ec047c555', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=960&crop=smart&auto=webp&s=84737ac60ed8b31ffb146cc9197f926cee4ab389', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?width=1080&crop=smart&auto=webp&s=0b78627757570a2a38cb5ae3b745814de9f64132', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AH3x_lpJz-uc6DdNvifyK8wrfm0f_2NXjul6ZcFtH3w.jpg?auto=webp&s=56c73970d3ef89df04decbbf95bc3a77f13c000f', 'width': 1200}, 'variants': {}}]} |
Project Automation - New Framework | 1 | [removed] | 2025-01-03T14:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hso8mg/project_automation_new_framework/ | xavier1764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hso8mg | false | null | t3_1hso8mg | /r/LocalLLaMA/comments/1hso8mg/project_automation_new_framework/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jpvnnppDy-6JkNdNpo64bW2InCoiyTSLNK_h7QQiDpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=108&crop=smart&auto=webp&s=edb40bf1eaae0a7eebb20f49fb5fb07c10dba262', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=216&crop=smart&auto=webp&s=e106faafe86f8cb10e0d2b5511222d037bec7866', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=320&crop=smart&auto=webp&s=16a783f82fd4063e1122e12ee1935023cd45823d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=640&crop=smart&auto=webp&s=a97c614b0630ffca7d790567f7bb12e4f93f8bb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=960&crop=smart&auto=webp&s=ced694f9c31aeb5b40b41836953f4079be2d73eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?width=1080&crop=smart&auto=webp&s=fac902350b1a8a913c477029e32a96e035e498cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lRrkvvLrX9CwhJHEYHYWZphOsDXe19GYk-vvECwhAHA.jpg?auto=webp&s=59fc0f1d8bd98e2e73404c01fe15da4a179ffdd6', 'width': 1200}, 'variants': {}}]} |
MLC chat app on Android | 1 | [removed] | 2025-01-03T15:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hsofgl/mlc_chat_app_on_android/ | Unusual_Reserve_2657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsofgl | false | null | t3_1hsofgl | /r/LocalLLaMA/comments/1hsofgl/mlc_chat_app_on_android/ | false | false | self | 1 | null |
How would quantum computing change LLMs? | 0 | Would we not even need LLMs, but have something different and much more powerful? Would quantum not be usfeful to LLMs? | 2025-01-03T15:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hsok7l/how_would_quantum_computing_change_llms/ | MoneyKenny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsok7l | false | null | t3_1hsok7l | /r/LocalLLaMA/comments/1hsok7l/how_would_quantum_computing_change_llms/ | false | false | self | 0 | null |
Deepseek-V3 GGUF's | 193 | Thanks to u/fairydreaming's work, quants have been uploaded: [https://huggingface.co/bullerwins/DeepSeek-V3-GGUF/tree/main](https://huggingface.co/bullerwins/DeepSeek-V3-GGUF/tree/main)
Can someone upload t/s with 512gb ddr4 ram and a single 3090? | 2025-01-03T15:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hsort6/deepseekv3_ggufs/ | fraschm98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsort6 | false | null | t3_1hsort6 | /r/LocalLLaMA/comments/1hsort6/deepseekv3_ggufs/ | false | false | self | 193 | {'enabled': False, 'images': [{'id': 'WUUDFdFngvZ6PsFeJjcgDXIhUEZnm3qDYW-fMggab5g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=108&crop=smart&auto=webp&s=accf990e1a4dac854a1bf85395568cb77265f7f9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=216&crop=smart&auto=webp&s=fc036be92fba248f2fdf79fb564666c3ca641b10', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=320&crop=smart&auto=webp&s=f137235e2b9d90883a910f7dff646ec9ae0d4540', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=640&crop=smart&auto=webp&s=12fa34b9c047d0adf8a8aa8b16bbf440542fbd1d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=960&crop=smart&auto=webp&s=716411f572ab25d68e928e58978b915d62bcfd51', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?width=1080&crop=smart&auto=webp&s=725d2cd9565a8158524bb1e38de314f16803106a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/e28Wyjblz1jlzZHHCqbNagadZPCdEEDMHQOS1o1xfs4.jpg?auto=webp&s=166193b1e857f838dadd18c179bf0f1ebef8ac15', 'width': 1200}, 'variants': {}}]} |
Video cards for a new server | 2 | Do AMD cards work as well as NVIDIA, or are they a pain? I can get a Radeon RX 7600 XT with 16GB of VRAM for $330, while I can't even find an NVIDIA card with more than 12GB. If I put 2 of those in a server, I could build the whole thing for about $1000; would that perform well? | 2025-01-03T15:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hsp7h7/video_cards_for_a_new_server/ | igorbirman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsp7h7 | false | null | t3_1hsp7h7 | /r/LocalLLaMA/comments/1hsp7h7/video_cards_for_a_new_server/ | false | false | self | 2 | null |
How to build a Large Language Model. My notes from a fantistic NeurIPS 2024 tutorial. | 1 | 2025-01-03T15:38:58 | https://manuelsh.github.io/blog/2025/NIPS-building-llm-workshop/ | Manuel_SH | manuelsh.github.io | 1970-01-01T00:00:00 | 0 | {} | 1hsp7pw | false | null | t3_1hsp7pw | /r/LocalLLaMA/comments/1hsp7pw/how_to_build_a_large_language_model_my_notes_from/ | false | false | default | 1 | null |
|
Missing closing tags when LLM outputs XML | 3 | I like to use XML tags to structure LLM (agent) output. For most tasks I find it simpler and more readable than JSON. It's also supposed to work best with Anthropic models. However, local LLMs tend to omit the very last closing tag. Could it have something to do with the sampler (llama.cpp)? It happens with Qwen2.5 32B and Mistral Small, which are currently my preferred smallish models for agentic stuff.
I can pretty much prevent the problem if I give it few-shot examples where the last closing tag is followed by some random text, which just gets ignored when the output is parsed. But I don't like that approach very much.
Anyone experience the same issue? Do you have a good solution? | 2025-01-03T15:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hspd85/missing_closing_tags_when_llm_outputs_xml/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hspd85 | false | null | t3_1hspd85 | /r/LocalLLaMA/comments/1hspd85/missing_closing_tags_when_llm_outputs_xml/ | false | false | self | 3 | null |
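Since the truncation is almost always just the final closing tag(s), one workaround is to repair the output before parsing instead of fighting the model. A minimal sketch (it assumes the tags are properly nested apart from the truncated tail):

```python
# Minimal sketch: repair LLM output that drops the final closing tag(s),
# then parse it. Assumes the tags are otherwise properly nested.
import re
import xml.etree.ElementTree as ET

def repair_and_parse(text: str) -> ET.Element:
    stack = []
    # Walk all tags; track which ones are still open at the end.
    for m in re.finditer(r"<(/?)([A-Za-z_][\w.-]*)[^>]*?(/?)>", text):
        closing, name, self_closing = m.group(1), m.group(2), m.group(3)
        if self_closing:
            continue
        if closing:
            if stack and stack[-1] == name:
                stack.pop()
        else:
            stack.append(name)
    # Append closing tags for anything left open (the usual failure mode).
    for name in reversed(stack):
        text += f"</{name}>"
    return ET.fromstring(text)

print(repair_and_parse("<result><answer>42</answer>").tag)  # -> "result"
```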
Seeking Advice on Building a Mobile App with a Locally Running Vision-Language Model | 1 | [removed] | 2025-01-03T16:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hspujn/seeking_advice_on_building_a_mobile_app_with_a/ | Mark__27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hspujn | false | null | t3_1hspujn | /r/LocalLLaMA/comments/1hspujn/seeking_advice_on_building_a_mobile_app_with_a/ | false | false | self | 1 | null |
Can Chain of Thought prompting ever do harm? | 1 | [removed] | 2025-01-03T16:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hspyo3/can_chain_of_thought_prompting_ever_do_harm/ | greentea387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hspyo3 | false | null | t3_1hspyo3 | /r/LocalLLaMA/comments/1hspyo3/can_chain_of_thought_prompting_ever_do_harm/ | false | false | self | 1 | null |
Really ? Wow we gonna see more advanced open source models then | 470 | 2025-01-03T16:15:44 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hsq1xf | false | null | t3_1hsq1xf | /r/LocalLLaMA/comments/1hsq1xf/really_wow_we_gonna_see_more_advanced_open_source/ | false | false | 470 | {'enabled': True, 'images': [{'id': 'xrmzqE9ILfHjmkNbmiIrhHa6BQe1x-ZVWU6CEhd39uQ', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=108&crop=smart&auto=webp&s=4560f2d137440c1eefc75aab154bb37201089767', 'width': 108}, {'height': 383, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=216&crop=smart&auto=webp&s=c2d89334bdfea351f45a288c97d919cee087b25c', 'width': 216}, {'height': 567, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=320&crop=smart&auto=webp&s=bd3a3db81cc909d4958a211f4d82d5c4f4574989', 'width': 320}, {'height': 1135, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=640&crop=smart&auto=webp&s=f6bc89a0c3f6bf73f5a7623ea0bc206c7c9b8bea', 'width': 640}, {'height': 1703, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=960&crop=smart&auto=webp&s=b7e880114bf98728907d2ebf17835fd3bb387a49', 'width': 960}, {'height': 1916, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?width=1080&crop=smart&auto=webp&s=ce15251efc3547cfe41dc1189cb245c82b8b197f', 'width': 1080}], 'source': {'height': 1916, 'url': 'https://preview.redd.it/qznkhyb70tae1.png?auto=webp&s=ee5a7b1a9d76c83a23c352728fa42f35d5f8cc4e', 'width': 1080}, 'variants': {}}]} |
|||
Using speculative decoding through API | 2 | Hi
Are Fireworks AI and OpenAI the only cloud providers offering speculative decoding through their APIs?
Right now, I'm using Qwen 2.5 Coder through Fireworks AI, but their input/output price is more than 10x higher compared to DeepInfra for example. | 2025-01-03T16:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hsqapt/using_speculative_decoding_through_api/ | Round_Mixture_7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsqapt | false | null | t3_1hsqapt | /r/LocalLLaMA/comments/1hsqapt/using_speculative_decoding_through_api/ | false | false | self | 2 | null |
RAG for structured data | 9 | I'm new to this whole RAG thing. Just learning my way around. It looks like the de facto standard is to use an embedding model to convert the first ~512 natural-language words into a ~1024-element vector and store the document in a vector database using the vector as the key, because it encodes the semantic meaning of the beginning of the document. Then the DB takes another embedding as the key and finds the documents closest to it in the vector space.
I have a bunch of concerns around this:
* I've seen people splitting documents into pages and adding each page into a vector DB using its embedding. This sounds like a really bad idea to me because pages are arbitrary splits. I'd split by heading, which raises the question: can you do this hierarchically? E.g. recursively summarize documents and insert the summaries into the vector DB with links to the subdocuments.
* A lot of the data I want to index is structured, e.g. an associative array mapping country name to GDP, or a network of country names and the languages spoken there.
For example, if I ask any AI I have access to a question like "List the total combined GDP of all people by the language they speak", none of them leverage RAG: they fail to decompose the problem and just end up guessing from their "memories" of which countries speak which languages, which can be wrong by a factor of ~3 (see the text-to-SQL sketch at the end of this post).
How are you guys doing RAG on structured data? Or is RAG the wrong tool for this job? | 2025-01-03T16:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hsqdow/rag_for_structured_data/ | PurpleUpbeat2820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsqdow | false | null | t3_1hsqdow | /r/LocalLLaMA/comments/1hsqdow/rag_for_structured_data/ | false | false | self | 9 | null |
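For the structured part, embedding retrieval is arguably the wrong tool. A common alternative is to keep the facts in a real database and have the LLM emit a query over it (text-to-SQL). A minimal sketch with tiny dummy data; in practice the SQL string would come from the model, prompted with the schema:

```python
# Minimal sketch of the "query the structure, don't embed it" approach:
# keep structured facts in SQLite and have the LLM emit SQL (text-to-SQL)
# instead of retrieving prose chunks. The data below is dummy data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gdp(country TEXT PRIMARY KEY, gdp_usd REAL);
CREATE TABLE languages(country TEXT, language TEXT);
INSERT INTO gdp VALUES ('Austria', 5.2e11), ('Brazil', 2.2e12);
INSERT INTO languages VALUES ('Austria','German'), ('Brazil','Portuguese');
""")

# In a real pipeline this string would be generated by the LLM,
# prompted with the schema above.
llm_generated_sql = """
SELECT l.language, SUM(g.gdp_usd) AS total_gdp
FROM languages l JOIN gdp g ON g.country = l.country
GROUP BY l.language ORDER BY total_gdp DESC;
"""
for row in con.execute(llm_generated_sql):
    print(row)  # exact answers instead of guesses from the model's "memory"
```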
What’s the best local LLM for coding? | 0 | What’s a good LLM you can use in VSCode that would run locally on a laptop? | 2025-01-03T16:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hsqeyk/whats_the_best_local_llm_for_coding/ | kiteTumbler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsqeyk | false | null | t3_1hsqeyk | /r/LocalLLaMA/comments/1hsqeyk/whats_the_best_local_llm_for_coding/ | false | false | self | 0 | null |
Need Help for ways to the Classification of documents | 1 | [removed] | 2025-01-03T16:41:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hsqn7l/need_help_for_ways_to_the_classification_of/ | Pleasant_Drink_4245 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsqn7l | false | null | t3_1hsqn7l | /r/LocalLLaMA/comments/1hsqn7l/need_help_for_ways_to_the_classification_of/ | false | false | self | 1 | null |
From Dylan Patel of SemiAnalysis: 1) "4o, o1, o1 preview, o1 pro are all the same size model". 2) The reason o1 is more expensive than gpt-4o is "related to seqlen kvcache overhead". 3) "o1 pro is same model [as o1] with adjustments at inference time". | 103 | Source: These 3 X posts:
https://x.com/dylan522p/status/1869077942305009886 .
https://x.com/dylan522p/status/1869082407653314888 .
https://x.com/dylan522p/status/1869085209649692860 .
Presumably these details are also in the paywalled part of SemiAnalysis article 'Scaling Laws – O1 Pro Architecture, Reasoning Training Infrastructure, Orion and Claude 3.5 Opus “Failures”': https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-training-infrastructure-orion-and-claude-3-5-opus-failures/ . | 2025-01-03T16:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hsqx07/from_dylan_patel_of_semianalysis_1_4o_o1_o1/ | Wiskkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsqx07 | false | null | t3_1hsqx07 | /r/LocalLLaMA/comments/1hsqx07/from_dylan_patel_of_semianalysis_1_4o_o1_o1/ | false | false | self | 103 | {'enabled': False, 'images': [{'id': 'cjL3jXiBkg2VjlhDqwTEqdVhOgw0QEtD2W3_TF4V3YM', 'resolutions': [{'height': 185, 'url': 'https://external-preview.redd.it/5LmwX6Cv0oOQUv3jg8BowY4xwsq1K13ipTUiRQ7NcvQ.jpg?width=108&crop=smart&auto=webp&s=c858857920d1a4836806a7ca1526b48239bca63e', 'width': 108}, {'height': 370, 'url': 'https://external-preview.redd.it/5LmwX6Cv0oOQUv3jg8BowY4xwsq1K13ipTUiRQ7NcvQ.jpg?width=216&crop=smart&auto=webp&s=b9b547ace780f271917e5ea2c4c8ed1d479b7008', 'width': 216}, {'height': 548, 'url': 'https://external-preview.redd.it/5LmwX6Cv0oOQUv3jg8BowY4xwsq1K13ipTUiRQ7NcvQ.jpg?width=320&crop=smart&auto=webp&s=c573beada04ca0acb9e058764920972736112b06', 'width': 320}, {'height': 1096, 'url': 'https://external-preview.redd.it/5LmwX6Cv0oOQUv3jg8BowY4xwsq1K13ipTUiRQ7NcvQ.jpg?width=640&crop=smart&auto=webp&s=70054e9425c403c21ee1e277c807b9683075f471', 'width': 640}], 'source': {'height': 1364, 'url': 'https://external-preview.redd.it/5LmwX6Cv0oOQUv3jg8BowY4xwsq1K13ipTUiRQ7NcvQ.jpg?auto=webp&s=5b1193bcd234c77cf0988c6493229f2096313086', 'width': 796}, 'variants': {}}]} |
When is it better to run 2 GPUs? | 25 | I was just playing around with LM Studio; with one 8GB GPU I got 60 t/s.
When I added a second (identical) GPU, it split the model between them but ran at only 7 t/s.
CPU R5 5600G
GPU RX6600 (2x)
llama3.2
I also tried the llama2 13b model, because it doesn't fit on one GPU, and got 7 t/s.
Is the connection between the GPUs the bottleneck? Would a different CPU/MB combo fix it or is it always better to use one GPU, until the model gets too big to fit on it? | 2025-01-03T16:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hsr2ty/when_is_it_better_to_run_2_gpus/ | vesko26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsr2ty | false | null | t3_1hsr2ty | /r/LocalLLaMA/comments/1hsr2ty/when_is_it_better_to_run_2_gpus/ | false | false | self | 25 | null |
Can RAG count? | 6 | Hi, I set up a small RAG system using ollama and llama3.2.
The document I am using has a large table in it with items, let's say item1, item2, item3, item4, etc.
When I ask my LLM how many "items" there are and to list them, it makes up an answer, usually 2 or 3, and the cited source is that item name showing up somewhere else in the doc. Other than this issue my system has been working reasonably well.
Any help would be appreciated, thanks | 2025-01-03T17:04:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hsr8j1/can_rag_count/ | Pointfit_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsr8j1 | false | null | t3_1hsr8j1 | /r/LocalLLaMA/comments/1hsr8j1/can_rag_count/ | false | false | self | 6 | null |
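Top-k chunk retrieval fundamentally can't see the whole table at once, so aggregate questions like "how many" tend to fail. One workaround is to extract the table deterministically and count in code, then hand the LLM the result to phrase. A minimal sketch using pdfplumber; the column name and file path are placeholders:

```python
# Minimal sketch: answer "how many items / list them" deterministically by
# parsing the table out of the PDF, then let the LLM phrase the answer.
import pdfplumber

def list_items(pdf_path: str, column: str = "Item") -> list[str]:
    items = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            for table in page.extract_tables():
                header, *rows = table
                if column in header:
                    idx = header.index(column)
                    items += [r[idx] for r in rows if len(r) > idx and r[idx]]
    return items

items = list_items("doc.pdf")
print(f"There are {len(items)} items: {', '.join(items)}")
# Feed this string to the LLM as context instead of asking it to count chunks.
```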
What is the best smallest model to run locally? | 1 | [removed] | 2025-01-03T17:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hsrh1s/what_is_the_best_smallest_model_to_run_locally/ | 0y0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsrh1s | false | null | t3_1hsrh1s | /r/LocalLLaMA/comments/1hsrh1s/what_is_the_best_smallest_model_to_run_locally/ | false | false | self | 1 | null |
I recompiled SmallThinker-3B-Preview into the WebGPU format, allowing you to use it directly on the web | 35 | Link to my WebLLM playground I made:
[https://shir-man.com/we-have-llm-at-home/](https://shir-man.com/we-have-llm-at-home/) (no cookies, no registration, no bs)
Link to the WASM model:
[https://huggingface.co/shirman/SmallThinker-3B-Preview-q4f16\_1-MLC-webgpu](https://huggingface.co/shirman/SmallThinker-3B-Preview-q4f16_1-MLC-webgpu)
Feel free to propose any features that you're missing | 2025-01-03T17:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hsri02/i_recompiled_smallthinker3bpreview_into_the/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsri02 | false | null | t3_1hsri02 | /r/LocalLLaMA/comments/1hsri02/i_recompiled_smallthinker3bpreview_into_the/ | false | false | self | 35 | null |
Trying to create a local personal assistant | 2 | Hi!
I am trying to create a personal assistant that I can speak to and that will speak back to me. I am technologically inclined and can follow directions quite well. I am stuck trying to figure out how to accomplish my goal, or whether there might be a better solution.
Currently I have an M4 Pro Mac Mini w/ 24GB memory. I am running LM Studio to serve my LLM, and it's fast enough for me. I am running OpenWebUI from my personal server in a Docker container to access the LLM via a domain I am hosting. I have set up Pinokio on the Mac and am running F5-TTS for TTS, and I would like to incorporate it into something like this: [https://github.com/huggingface/transformers.js-examples/tree/main/moonshine-web](https://github.com/huggingface/transformers.js-examples/tree/main/moonshine-web)
I would like it to use the Moonshine web-app as a gateway for talking to it and have it talk back while using the LLM from LM Studio that I am serving.
I know it might sound stupid, but my end goal is to use an iPhone or Android device with my Xreal Air 2 Ultra glasses while I'm out and about to access this personal assistant while keeping my privacy.
Any suggestions would be greatly appreciated. | 2025-01-03T17:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hsrkrp/trying_to_create_a_local_personal_assistant/ | xkrist0pherx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsrkrp | false | null | t3_1hsrkrp | /r/LocalLLaMA/comments/1hsrkrp/trying_to_create_a_local_personal_assistant/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'dpFnn6GiZMK8tiXOJ3947CIIERqyA1KVLHrHxgtgzIM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=108&crop=smart&auto=webp&s=0a26a4654a8309c53d6dc30d0a69163b78791c7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=216&crop=smart&auto=webp&s=1891374dffe758fce139f4ba23da96d20c024006', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=320&crop=smart&auto=webp&s=fcc75c28a54c51af98ae04fd63651a32ca5fc8c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=640&crop=smart&auto=webp&s=96b147df172c671bf2c6151c1a65e2c30f105e8d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=960&crop=smart&auto=webp&s=cd5908fc93fb410d73a6f80fa40cd5f5572da70a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?width=1080&crop=smart&auto=webp&s=fdaacecadc68624510ad5fb69b6173bf805e5df8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/krsIwiYnPKwUfJd9SqzLZuXLNw7klvp9Db7RIdZQujw.jpg?auto=webp&s=fa7d305be54eac4214366d5177afdc6e5dda6c5d', 'width': 1200}, 'variants': {}}]} |
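For reference, the glue between the speech-to-text output and the served model is just an OpenAI-compatible chat call. A minimal sketch assuming LM Studio's default local port (1234); the model name and the TTS hook at the end are placeholders:

```python
# Minimal sketch of the glue layer: take transcribed text (e.g. from
# Moonshine), ask the LM Studio server (OpenAI-compatible API), and hand
# the reply to your TTS. Endpoint and model name are assumptions.
import requests

def ask_assistant(transcript: str) -> str:
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # LM Studio serves whatever model is loaded
            "messages": [
                {"role": "system", "content": "You are a concise voice assistant."},
                {"role": "user", "content": transcript},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

reply = ask_assistant("What's on my calendar today?")
# synthesize_with_f5_tts(reply)  # hypothetical hook into your F5-TTS setup
```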
Wikipedia Database vs. OpenSource AI - Which to Backup Humanity's Knowledge? | 0 | If you had to choose between [Wikipedia's database](https://en.wikipedia.org/wiki/Wikipedia:Database_download) (\~19GB English compressed) or an open-source AI model (e.g., Llama 3.3, Qwen2.5, etc.) to preserve some of humanity's knowledge for your own personal use, which would you pick and why?
Are there benchmarks to compare an AI model's accuracy against Wikipedia? | 2025-01-03T17:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hsruen/wikipedia_database_vs_opensource_ai_which_to/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsruen | false | null | t3_1hsruen | /r/LocalLLaMA/comments/1hsruen/wikipedia_database_vs_opensource_ai_which_to/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'sC7LHfIp3uHPuN8M0gzJLlh3gLn75KgnQY6_np0smDc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/38Nq_bUKBzclmr-cP0ANe3r0AJaJPz5h6Bwq63AiNcE.jpg?width=108&crop=smart&auto=webp&s=96953a68173c60be7ccc0f7e62ecff7891b59710', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/38Nq_bUKBzclmr-cP0ANe3r0AJaJPz5h6Bwq63AiNcE.jpg?width=216&crop=smart&auto=webp&s=168246d50c4e889e8269a3390cd3f24f6bcebdc4', 'width': 216}], 'source': {'height': 124, 'url': 'https://external-preview.redd.it/38Nq_bUKBzclmr-cP0ANe3r0AJaJPz5h6Bwq63AiNcE.jpg?auto=webp&s=0932cd767e2d293f7ed4e9e2bff91bb58bda4d7e', 'width': 220}, 'variants': {}}]} |
Model size increase in ollama when context size increases | 44 | Hi folks,
I can't wrap my head around why different models increase their size disproportionately with context length, seemingly across different model architectures. The following are the stats from `ollama ps`, extracting only the "SIZE" column for each instance:
| Context Size | llama3.2:3b | mistral-nemo:12b |
|--------------|----------------|--------------------|
| **4k** | 5.4 GB | 9.3 GB |
| **32k** | 8.0 GB | 15 GB |
| **64k** | 13 GB | 22 GB |
| **128k** | 24 GB | 38 GB |
I've also plotted a graph of the table, which clearly shows a steeper slope of model-size increase for mistral-nemo:12b as the context size gets bigger.
I thought each cached token, stored as quantized numbers, should occupy the same amount of memory regardless of model architecture.
Is there something I am missing?
Is it due to the difference in embedding length? The embedding dimension of mistral-nemo is 5120 vs. 3072 for llama3.2:3b.
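For what it's worth, the KV cache grows linearly with context and scales with layers × KV heads × head dim rather than with the embedding dimension directly, and mistral-nemo has 40 layers vs. llama3.2:3b's 28. A minimal sketch of the fp16 KV-cache arithmetic (the architecture numbers are recalled from the models' published configs, so double-check against each config.json; ollama's totals also include weights and compute buffers, hence the larger observed sizes):

```python
# Minimal sketch of fp16 KV-cache size: 2 (K and V) * layers * kv_heads
# * head_dim * context_len * 2 bytes. Config numbers below are assumptions
# recalled from the models' configs -- verify against config.json.
def kv_cache_gib(layers, kv_heads, head_dim, ctx, bytes_per=2):
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per / 2**30

for ctx in (4_096, 32_768, 65_536, 131_072):
    llama = kv_cache_gib(layers=28, kv_heads=8, head_dim=128, ctx=ctx)
    nemo = kv_cache_gib(layers=40, kv_heads=8, head_dim=128, ctx=ctx)
    print(f"{ctx:>7} ctx: llama3.2:3b ~{llama:5.1f} GiB, mistral-nemo ~{nemo:5.1f} GiB")
```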
| 2025-01-03T17:39:24 | siegevjorn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hss2ln | false | null | t3_1hss2ln | /r/LocalLLaMA/comments/1hss2ln/model_size_increase_in_ollama_when_context_size/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'ay1-sXwl7Buu__HkMjGykII85tvgyE_pxI8XnyTnRrw', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/qg51w985ftae1.jpeg?width=108&crop=smart&auto=webp&s=97e2bc2927ef4a5e2fa365bb9a1b11f538acbff2', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/qg51w985ftae1.jpeg?width=216&crop=smart&auto=webp&s=05c9f8576eca4a6b0ec5466cf7d6d3e3b85cd847', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/qg51w985ftae1.jpeg?width=320&crop=smart&auto=webp&s=109cbc5647b607aed23ee90d2ceae966be395033', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/qg51w985ftae1.jpeg?width=640&crop=smart&auto=webp&s=0447f05cd9bcdfc2be2555365820ee1dcad2025a', 'width': 640}], 'source': {'height': 590, 'url': 'https://preview.redd.it/qg51w985ftae1.jpeg?auto=webp&s=bc054b8867eea8efdf0bf614222e6b56e6c9510f', 'width': 921}, 'variants': {}}]} |
||
Buying used GPU to run homelab. | 0 | Just want to send out a general warning that you might get unlucky buying a used GPU. Most RTX cards have been used for mining. Run AIDA64, FurMark, or other tests before buying, and pass this responsibility to the seller if you can't show up physically at the seller's location.
Got unlucky with a used 3090; I'm now in the difficult process of getting a refund. | 2025-01-03T17:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hss4yp/buying_used_gpu_to_run_homelab/ | StandardLovers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hss4yp | false | null | t3_1hss4yp | /r/LocalLLaMA/comments/1hss4yp/buying_used_gpu_to_run_homelab/ | false | false | self | 0 | null |
Best local model for reasoning on a base Apple Silicon Mac | 3 | Hello! I've been trying to get a local model like Llama 3.2-3B-instruct to infer contextual information from given tab titles and group these titles into distinct categories that represent the likely purpose of each tab.
For example, titles like "MLX-how to set up" and "Deepseek-into the unknown" should be labelled something like "AI research".
But so far with prompt engineering it's still super bad at it.
This seems like a reasoning-heavy task, and I'm wondering if anyone has experience with local models that can run on even a base M1 Mac (speed doesn't matter too much) and have better reasoning abilities for the categorisation task I'm trying to do.
Thanks! | 2025-01-03T17:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hssc88/best_local_model_for_reasoning_on_a_base_apple/ | Tonqer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hssc88 | false | null | t3_1hssc88 | /r/LocalLLaMA/comments/1hssc88/best_local_model_for_reasoning_on_a_base_apple/ | false | false | self | 3 | null |
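One thing that tends to help small models on tasks like this is forcing constrained JSON output rather than free-form reasoning. A minimal sketch against Ollama's chat endpoint with `format: json`; the category scheme and prompt wording are illustrative:

```python
# Minimal sketch: push the categorisation into constrained JSON output,
# via Ollama's /api/chat with format="json". Valid JSON is enforced, which
# helps small models like llama3.2:3b considerably.
import json
import requests

def categorize(titles: list[str]) -> dict:
    prompt = (
        "Group these browser tab titles by their likely purpose. "
        'Reply ONLY with JSON like {"AI research": ["..."], "Shopping": ["..."]}.\n'
        "Titles:\n" + "\n".join(f"- {t}" for t in titles)
    )
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.2:3b",
            "messages": [{"role": "user", "content": prompt}],
            "format": "json",
            "stream": False,
        },
        timeout=120,
    )
    return json.loads(r.json()["message"]["content"])

print(categorize(["MLX-how to set up", "Deepseek-into the unknown"]))
```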
Tool calling with edge models | 2 | Hey everyone,
We’ve been testing some edge models (like **Llama 3.2 3B** and **Granite 3.1 8B** dense) with function calling for a new SaaS we’re building. It's a simple service designed for AI agents to search, purchase, and manage domains autonomously. It's been an exciting journey, but we've hit a few quirks, especially with local LLMs.
For example, one of our core functions is listing DNS entries for a domain. This requires passing a \`domain\_id\`, which can be obtained from the \`domains()\` call that returns all the domains you own. While models like 4o-mini and haiku handle this, smaller local models often confuse the \`domain\_id\` with the domain name itself, despite it being a UUID.
We’ve tried various approaches with prompting (explicit instructions, structured responses, etc.), but nothing completely resolves the issue. The performance of these smaller models is definitely improving, but they still stumble on what should be straightforward tasks.
I’m curious: Has anyone else faced similar issues when using smaller or local models for function calling in their projects? Would love to hear your experiences and insights! | 2025-01-03T18:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hssprf/tool_calling_with_edge_models/ | fewsats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hssprf | false | null | t3_1hssprf | /r/LocalLLaMA/comments/1hssprf/tool_calling_with_edge_models/ | false | false | self | 2 | null |
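A mitigation that works reasonably well with small models is validating (and, where possible, auto-repairing) tool-call arguments before execution, and bouncing failures back as tool errors so the model retries. A minimal sketch; the function and field names are illustrative, not the actual service API:

```python
# Minimal sketch: validate tool-call arguments before executing, and
# repair or reject the classic "name passed instead of UUID" failure.
import re
import uuid

UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def validate_dns_call(args: dict, owned_domains: dict[str, str]):
    """owned_domains maps domain name -> domain_id (from a prior domains() call)."""
    domain_id = args.get("domain_id", "")
    if UUID_RE.match(domain_id) and domain_id in owned_domains.values():
        return args  # looks sane, go ahead and execute the tool
    if domain_id in owned_domains:
        # Small-model failure mode: the *name* was passed instead of the
        # UUID. Repair it silently rather than burning a retry.
        return {**args, "domain_id": owned_domains[domain_id]}
    # Otherwise return an error string to feed back as the tool result,
    # which usually gets the model to correct itself on the next turn.
    return (f"Invalid domain_id {domain_id!r}; it must be one of the UUIDs "
            f"returned by domains(): {list(owned_domains.values())}")

owned = {"example.com": str(uuid.uuid4())}
print(validate_dns_call({"domain_id": "example.com"}, owned))  # auto-repaired
```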
2024 was the year GGUF took off | 1 | [https://huggingface.co/datasets/cfahlgren1/hub-stats/embed/sql-console/YpoTCDR](https://preview.redd.it/4apbsr5uotae1.png?width=1786&format=png&auto=webp&s=ce7db0beec4bba87a74962dccd0d60251f727a35)
| 2025-01-03T18:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hstgbw/2024_was_the_year_gguf_took_off/ | cfahlgren1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hstgbw | false | null | t3_1hstgbw | /r/LocalLLaMA/comments/1hstgbw/2024_was_the_year_gguf_took_off/ | false | false | 1 | null |
|
2024 was the year GGUF took off | 150 | 2025-01-03T18:37:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hsthyh/2024_was_the_year_gguf_took_off/ | cfahlgren1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsthyh | false | null | t3_1hsthyh | /r/LocalLLaMA/comments/1hsthyh/2024_was_the_year_gguf_took_off/ | false | false | 150 | null |
||
LLM + programming environment | 10 | LLMs are great but I think they would be enormously more useful if they had access to programmable environments (or at least a calculator!) and knew how to leverage that capability.
I'm slowly working towards this goal but am facing several challenges:
* The LLMs I have suck at choosing the right tool for the job. They tend to choose Python for everything when, for example, I've found that working through simple tooling problems using shell scripts works better and, for complicated problems, using statically-typed languages like OCaml tends to get me to a working solution faster because the LLM fixes type errors better than Python's run-time errors.
* Qwen is my fav model, but when it chooses to write code to solve a problem it prefers to pretend to run its code rather than actually running it (see the execution-loop sketch at the end of this post). I guess I need to fine-tune it but, as yet, no joy...
How are you approaching the challenge of integrating programming with LLMs? | 2025-01-03T18:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hstqll/llm_programming_environment/ | PurpleUpbeat2820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hstqll | false | null | t3_1hstqll | /r/LocalLLaMA/comments/1hstqll/llm_programming_environment/ | false | false | self | 10 | null |
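On the "pretends to run its code" point, an alternative to fine-tuning is to never let the model claim execution at all: intercept the fenced code block, run it yourself, and feed the real stdout back as the next message. A minimal sketch (sandboxing is deliberately omitted; don't run untrusted code like this outside a VM or container):

```python
# Minimal sketch of the "really run it" loop: extract the model's fenced
# code block, execute it in a subprocess, append the real output to the chat.
import re
import subprocess
import tempfile

FENCE = "`" * 3  # builds the triple-backtick marker without embedding it here

def run_python_block(reply: str):
    """Extract the first fenced python block from a model reply and execute it."""
    m = re.search(FENCE + r"python\n(.*?)" + FENCE, reply, re.S)
    if not m:
        return None
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(m.group(1))
    proc = subprocess.run(["python", f.name], capture_output=True,
                          text=True, timeout=30)
    return proc.stdout + proc.stderr

reply = f"Here you go:\n{FENCE}python\nprint(6 * 7)\n{FENCE}"
print(run_python_block(reply))  # "42" -- append this to the chat as the result
```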
Best Software for Running Local LLMs on Windows with AMD 6800XT and 16GB VRAM | 1 | [removed] | 2025-01-03T19:00:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hsu242/best_software_for_running_local_llms_on_windows/ | ITMSPGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsu242 | false | null | t3_1hsu242 | /r/LocalLLaMA/comments/1hsu242/best_software_for_running_local_llms_on_windows/ | false | false | self | 1 | null |
Get multiple llms talking to each other? | 0 | Hello everyone.
I have LM Studio, Oobabooga, and other LLM programs that I use to run my LLMs.
I want to set up two or more LLMs talking to each other...
Any idea how I could do this? | 2025-01-03T19:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hsu81h/get_multiple_llms_talking_to_each_other/ | Aggressive_Special25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsu81h | false | null | t3_1hsu81h | /r/LocalLLaMA/comments/1hsu81h/get_multiple_llms_talking_to_each_other/ | false | false | self | 0 | null |
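With any two OpenAI-compatible servers (LM Studio, llama.cpp's server, and text-generation-webui all expose one), this is a small loop that alternates turns. A minimal sketch; the ports and model name are assumptions, and conversation history is kept to one message for brevity:

```python
from openai import OpenAI  # pip install openai; works against local servers

bots = [
    ("Alice", OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")),
    ("Bob",   OpenAI(base_url="http://localhost:1235/v1", api_key="not-needed")),
]
message = "Hi! Let's debate: are cats better than dogs?"

for turn in range(6):
    name, client = bots[turn % 2]
    resp = client.chat.completions.create(
        model="local-model",  # LM Studio ignores this and uses the loaded model
        messages=[
            {"role": "system", "content": f"You are {name}. Keep replies short."},
            {"role": "user", "content": message},  # last message only; no history
        ],
    )
    message = resp.choices[0].message.content
    print(f"{name}: {message}\n")
```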
Local LLM list based on features | 1 | [removed] | 2025-01-03T19:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hsu817/local_llm_list_based_on_features/ | Less-Capital9689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsu817 | false | null | t3_1hsu817 | /r/LocalLLaMA/comments/1hsu817/local_llm_list_based_on_features/ | false | false | self | 1 | null |
Going open source | 1 | [removed] | 2025-01-03T19:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hsub29/going_open_source/ | Huge-Princess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsub29 | false | null | t3_1hsub29 | /r/LocalLLaMA/comments/1hsub29/going_open_source/ | false | false | self | 1 | null |
Small models for function calling that are purely instruct-trained, without cultural knowledge? | 3 | I want to train my own LLM to operate within the context of a specific application (very small subset of functionality is needed... "go here, do this, do that" etc...)
But I don't want to spend $xx,xxx dollars and several weeks on training it.
Do we have small instruct-training datasets, or maybe foundation models that I can use to do this, that DON'T have cultural information? Knowing the year that Wrath of Khan came out is great and all, but is pure noise for my use case.
| 2025-01-03T19:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hsuey6/small_models_for_function_calling_that_are_purely/ | platistocrates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsuey6 | false | null | t3_1hsuey6 | /r/LocalLLaMA/comments/1hsuey6/small_models_for_function_calling_that_are_purely/ | false | false | self | 3 | null |
You programming RLHF, RLHF programming you ... | 1 | 2025-01-03T19:23:21 | one-escape-left | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hsum0d | false | null | t3_1hsum0d | /r/LocalLLaMA/comments/1hsum0d/you_programming_rlhf_rlhf_programming_you/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'mfmpc-Z904ELOBgruxew5dq3d4Jl5wcbssfiHMoWFGI', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?width=108&crop=smart&auto=webp&s=7dcd7c05f38d78060f5d409ec8ede7dfce54fe37', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?width=216&crop=smart&auto=webp&s=b1d0f650543da7f777d0f90537fc5b3943299b89', 'width': 216}, {'height': 401, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?width=320&crop=smart&auto=webp&s=275892717151c8a069f5e0f124db6c0ec2cf78ec', 'width': 320}, {'height': 802, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?width=640&crop=smart&auto=webp&s=6cdad7fd02598292691eb1bf637f93a0af2e0254', 'width': 640}, {'height': 1203, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?width=960&crop=smart&auto=webp&s=dd2af3f610b76305fb8c0c6215f65f50197dc531', 'width': 960}], 'source': {'height': 1203, 'url': 'https://preview.redd.it/6onsiluoxtae1.jpeg?auto=webp&s=962d0482ea4c7b2ca6aedb0ef147ec7ef9647921', 'width': 960}, 'variants': {}}]} |
|||
Introducing gsh - The Generative Shell. An interactive shell like bash/zsh/fish that can talk to your local LLM to suggest, explain, run commands or make code changes for you. | 122 | 2025-01-03T19:34:17 | atinylittleshell | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hsuvkl | false | null | t3_1hsuvkl | /r/LocalLLaMA/comments/1hsuvkl/introducing_gsh_the_generative_shell_an/ | false | false | 122 | {'enabled': True, 'images': [{'id': '8MfOBXRH-914jhsWIJFB3ya4z1jXpdk50hiGurhivMo', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=108&crop=smart&format=png8&s=fb8b6344070f1aadc5e6a5960e07040953fd71cd', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=216&crop=smart&format=png8&s=72f3667ab5f8c8881736fff733f7e4b9df3da470', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=320&crop=smart&format=png8&s=dc751c4ee45e7ad5cb74a518ad780f3d495c9450', 'width': 320}, {'height': 410, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=640&crop=smart&format=png8&s=63e376e49f910906bad12493ecae44bde8f14843', 'width': 640}], 'source': {'height': 493, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?format=png8&s=e2f044cbc31fdde9868fcd002765b08141024b8e', 'width': 769}, 'variants': {'gif': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=108&crop=smart&s=014755f3b886c74d0421f1a3ec3a9c402d81d1e9', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=216&crop=smart&s=9e1a73c35dd5888c4354e1bb70c5786c31975922', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=320&crop=smart&s=b5a336f5a9989b2586e4d45ca2308a88420d21d8', 'width': 320}, {'height': 410, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=640&crop=smart&s=23b54fab6cad6df504e9325138c79fba3b74822b', 'width': 640}], 'source': {'height': 493, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?s=7cb2f25589f797ee7ed16bdf01d28e70cba457f3', 'width': 769}}, 'mp4': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=108&format=mp4&s=27063e5b80e61fdaf3feee268c5a9f0f51f07b18', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=216&format=mp4&s=2eceea06068f007883e62e30857847ba067748f3', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=320&format=mp4&s=e4edc872b36b75c3737864b73504e8311445d7d6', 'width': 320}, {'height': 410, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?width=640&format=mp4&s=c12c44278de4f177280752946077af19151c3507', 'width': 640}], 'source': {'height': 493, 'url': 'https://preview.redd.it/japw6jg0xtae1.gif?format=mp4&s=d00581f0985723ced9a6e53001e6fe21aec5b446', 'width': 769}}}}]} |
|||
How many A100 80 gpu do I need to run fully quantized Llama 70B model for inference? | 0 | Inference is one thing. What if I'm doing fine-tuning of a fully quantized 70B model? Do you have thoughts on this too? | 2025-01-03T19:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hsv90v/how_many_a100_80_gpu_do_i_need_to_run_fully/ | kitkatmafia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hsv90v | false | null | t3_1hsv90v | /r/LocalLLaMA/comments/1hsv90v/how_many_a100_80_gpu_do_i_need_to_run_fully/ | false | false | self | 0 | null |
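Rough weights-only arithmetic for a 70B model (real usage adds KV cache, activations, and framework overhead, so treat these as lower bounds; "fully quantized" usually means 4-8 bit, which changes the answer a lot). Full fine-tuning is a different beast because of gradients and optimizer states:

```python
# Back-of-the-envelope VRAM for a 70B model; weights only, lower bounds.
import math

PARAMS = 70e9  # Llama 70B
A100 = 80      # GB per card

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{weights_gb:.0f} GB weights -> "
          f"{math.ceil(weights_gb / A100)}x A100-80GB (weights only)")

# Full fp16 fine-tuning with Adam: weights + grads + fp32 master weights +
# optimizer moments ~= 16 bytes/param ~= 1.1 TB, i.e. 14+ cards -- which is
# why people reach for LoRA/QLoRA instead.
print(f"full fp16 fine-tune: ~{PARAMS * 16 / 1e9:.0f} GB")
```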
2025 is important - Qwen | 168 | 2025-01-03T19:53:48 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hsvcgs | false | null | t3_1hsvcgs | /r/LocalLLaMA/comments/1hsvcgs/2025_is_important_qwen/ | false | false | 168 | {'enabled': True, 'images': [{'id': 'CrM-eUIvTImqoWdVMoo31FjQ0SfX7fOdOYTr8PfIh0E', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/q1sgnfc33uae1.png?width=108&crop=smart&auto=webp&s=4a9b6ec06b96e88ce86fe3f5e35565fcdbba5c66', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/q1sgnfc33uae1.png?width=216&crop=smart&auto=webp&s=5f7c3f71e533e5174eeceaa88e78085865ee32f5', 'width': 216}, {'height': 99, 'url': 'https://preview.redd.it/q1sgnfc33uae1.png?width=320&crop=smart&auto=webp&s=52bc6d017d86322d0082251947545f58aae9a002', 'width': 320}, {'height': 198, 'url': 'https://preview.redd.it/q1sgnfc33uae1.png?width=640&crop=smart&auto=webp&s=a1e929cb2dfc699488fd4db76fbc77fbdc500059', 'width': 640}], 'source': {'height': 224, 'url': 'https://preview.redd.it/q1sgnfc33uae1.png?auto=webp&s=78a9a5e0ba1b5e7bd0117feb86185886fdd10850', 'width': 722}, 'variants': {}}]} |