Hugging Face continually pretrained Llama 3.2 3B to achieve 2-3x improvement on MATH
Posted by u/vaibhavs10 · 2025-01-06 20:37 · score 1 · /r/LocalLLaMA/comments/1hv960u/hugging_face_continually_pretrained_llama_32_3b/
Hey hey everyone, VB from HF here. The SmolLM team at HF ran some training ablations on using high-quality math tokens to boost model performance. The result: with *just* 160B high-quality, commercially permissive tokens, Llama 3.2 3B (continually pretrained) scored 2x higher on GSM8K and 3x higher on MATH*.

\*with minimal drop in performance on MMLU-Pro and no drop on HellaSwag

Our script for continual training with Nanotron is available on the smollm GitHub repo, along with everything needed to reproduce the training and ablation studies. Go vibe check the model today!

- Model: https://huggingface.co/HuggingFaceTB/FineMath-Llama-3B
- Dataset: https://huggingface.co/datasets/HuggingFaceTB/finemath
- Reproduce the training/ablation: https://github.com/huggingface/smollm/tree/main/pre-training/continual-pretraining
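For anyone who wants to vibe check it locally, a minimal sketch using `transformers` (model ID from the links above; the prompt and generation settings are arbitrary, and `device_map="auto"` assumes `accelerate` is installed):

```python
# Minimal sketch: load the continually pretrained checkpoint and sample from it.
# This is a base model, so a plain completion prompt is appropriate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/FineMath-Llama-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: If 3x + 5 = 20, what is x?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```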
New to this - need help with advice on a build purchase
Posted by u/DrRoughFingers · 2025-01-06 20:57 · score 1 · /r/LocalLLaMA/comments/1hv9n4w/new_to_this_need_help_with_advice_on_a_build/
Hey everyone, I'm very new to LLMs and all of this AI. I've been using services like Claude, ChatGPT, Midjourney, etc., and want to go local and cancel some subscriptions. I currently have a buddy who will sell me his gaming setup (some of the hardware is pretty lackluster) for $750, and I'm wondering if it's a good deal and what I would need to change to get the best performance for AI work (I also do CAD work and 3D rendering). I know the RAM will definitely be something I have to upgrade, as 16 GB isn't nearly enough (from what I've read; again, new and ignorant to most of this, minus running Llama with Open WebUI on my 16GB M1 Pro MacBook Pro, and Flux taking 11 minutes to generate a single 1MP image 😂).

I'm wondering:

a) if it's a good deal for the cost
b) if so, what I should immediately upgrade (I also plan to game on this)
c) if I'd be better off buying a build with a better CPU, motherboard, and RAM

Here are the specs of his build (well, purchase, as it's an HP Omen). OMEN by HP Desktop PC, product number 2H4A2AV:

•20C1 Cycle AV
•NVIDIA® GeForce RTX™ 3090 (24 GB GDDR6X dedicated)
•WD Black 256 GB PCIe® NVMe™ TLC M.2 SSD
•BU RCTO OMN DoradoOCAMP 30L PREM Z490 US
•Front Bezel Shadow Black Glass, Dark Chrome Logo + Side Cover Glass, with Cooler Master AMP 750 W Platinum-efficiency power supply
•HyperX® 16 GB DDR4-3200 XMP RGB SDRAM (2 x 8 GB)
•Realtek Wi-Fi 5 (2x2) and Bluetooth® 5 combo, MU-MIMO supported
•Windows 11 Home 64 ADV
•OSLOC US
•Intel® Core™ i9-10850K w/ liquid cooling (3.6 GHz up to 5.2 GHz, 20 MB L3 cache, 10 cores)
•CKIT HP CTO OMEN 1C20 US
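For sizing purposes, a back-of-the-envelope sketch of what the 3090's 24 GB of VRAM fits (the bits-per-weight figures are rough rules of thumb for GGUF quants, not exact numbers):

```python
# Rough GGUF size estimate: params * bits_per_weight / 8, plus ~2 GB
# headroom for KV cache and runtime overhead. All figures approximate.
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

for name, params in [("8B", 8), ("14B", 14), ("32B", 32), ("70B", 70)]:
    q4 = gguf_size_gb(params, 4.5)   # ~Q4_K_M
    q8 = gguf_size_gb(params, 8.5)   # ~Q8_0
    verdict = "fits in 24 GB" if q4 + 2 < 24 else "needs CPU offload"
    print(f"{name}: ~{q4:.0f} GB at Q4, ~{q8:.0f} GB at Q8 -> Q4 {verdict}")
```

By that estimate the 3090 comfortably runs 8-14B models at high quants and ~32B at Q4, while 70B needs heavy offloading, so the GPU is the part of this build worth anchoring on.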
Best Strategy to Handle a Book as Input
Posted by u/mohammadomar17 · 2025-01-06 20:58 · score 1 · /r/LocalLLaMA/comments/1hv9oho/best_strategy_to_handle_a_book_as_input/
[removed]
Llama 3B - you can 2-3x the math capabilities just by continually training on 160B high-quality tokens*
Posted by u/Own-Potential-2308 · 2025-01-06 21:06 · score 1 · image: https://i.redd.it/t3kjugswufbe1.jpeg
*without compromising on other metrics
Yet another reason why we must have local models
Posted by u/takuonline · 2025-01-06 21:22 · score 1 · /r/LocalLLaMA/comments/1hva9ka/yet_another_reason_why_we_must_have_local_models/
> Remember when Uber rides cost next to nothing? 🚗💨 That was the era of VC-subsidized transportation. Now we're in the age of VC-subsidized AI. Instead of cheap rides, we're getting cheap intelligence. As Dan Hockenmaier pointed out recently: use it while it lasts, because nothing this good stays free forever.

This was in response to Sam Altman's post on X saying:

> insane thing: we are currently losing money on openai pro subscriptions! people use it much more than we expected.

Original post: https://www.linkedin.com/posts/rubendominguezibar_remember-when-uber-rides-cost-next-to-nothing-activity-7282134404733284352-Sdz1?utm_source=share&utm_medium=member_android

This is such an interesting take and I wonder if it's true. Then again, we have made models orders of magnitude smaller, faster, and cheaper in the last two years, so this might not be the case. Thoughts?
Working With Multidimensional NPZ/PKL Data [SMPL/AMASS]
Posted by u/Affectionate-Head246 · 2025-01-06 21:26 · score 1 · /r/LocalLLaMA/comments/1hvadru/working_with_multidimensional_npzpkl_data/
I am working on a project that involves fine-tuning with human-motion data. For that, I was advised to work with the SMPL/AMASS databases, which are stored in .npz/.pkl files. I have never worked with these file types before, and some of the data is 3-dimensional, which doesn't map onto CSV. Can someone please explain how I can work with these databases?
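For reference, .npz is just a zip archive of named NumPy arrays and .pkl is a pickled Python object, so both load in a few lines. A minimal sketch (filenames are hypothetical, and key names vary by AMASS release, so inspect `files`/`keys()` first):

```python
import pickle
import numpy as np

# .npz: a zip archive of named NumPy arrays, lazy-loaded on access.
data = np.load("motion_sequence.npz")    # hypothetical filename
print(data.files)                        # e.g. ['poses', 'trans', 'betas']
poses = data["poses"]                    # pose parameters, shape (frames, dims)
print(poses.shape, poses.dtype)          # 2-D/3-D arrays are no problem here

# .pkl: an arbitrary pickled Python object, often a dict of arrays.
# Older SMPL model pickles were written by Python 2, hence encoding="latin1".
with open("smpl_model.pkl", "rb") as f:  # hypothetical filename
    model = pickle.load(f, encoding="latin1")
if isinstance(model, dict):
    print(model.keys())
```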
RTX8000 passive NVlink with 16x PCIE and 4x PCIE slots
Posted by u/JusticeDread · 2025-01-06 21:34 · score 1 · /r/LocalLLaMA/comments/1hvakgm/rtx8000_passive_nvlink_with_16x_pcie_and_4x_pcie/
[removed]
Scaling Inference Time Compute with On-Device Language Models in GPT4All
Posted by u/AIGuy3000 · 2025-01-06 21:38 · score 2 · /r/LocalLLaMA/comments/1hvao1l/scaling_inference_time_compute_with_ondevice/
Key features in the GPT4All Reasoning System:

- Reasoning System and Models: designed specifically for combining iterative LLM outputs, chain of thought, and tool calls to solve harder problems.
- Code Interpreter: execute code inline with your prompts for advanced problem-solving.
- Tool Calling: seamlessly interact with external tools to enhance your workflows.
- Code Sandboxing: run secure, platform-agnostic code tool calls directly on your device.

https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute
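The general pattern behind systems like this is an iterative reason-act loop. As an illustration only (this is not GPT4All's actual API; `ask_model` and the `TOOL:`/`FINAL:` protocol are invented for the sketch):

```python
# Illustrative reason-act loop: the model alternates between chain-of-thought
# text and tool calls until it emits a final answer.
import json

def ask_model(transcript: str) -> str:
    raise NotImplementedError("call your local model here")

def run_tool(name: str, args: dict) -> str:
    if name == "python":
        # A real system would sandbox this; eval() is for illustration only.
        return repr(eval(args["expression"], {}, {}))
    return f"unknown tool: {name}"

def solve(question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = ask_model(transcript)
        transcript += step + "\n"
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        if step.startswith("TOOL:"):  # e.g. TOOL: {"name": "python", "args": {...}}
            call = json.loads(step.removeprefix("TOOL:"))
            transcript += f"RESULT: {run_tool(call['name'], call.get('args', {}))}\n"
    return "no answer within step budget"
```

Each tool result is fed back into the transcript, which is how extra inference-time compute gets converted into better answers.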
I made a CLI for improving prompts using a genetic algorithm
Posted by u/jsonathan · 2025-01-06 21:50 · score 1 · demo: https://i.redd.it/p8q191zp2gbe1.gif
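No write-up accompanies the GIF, but the idea in the title is easy to sketch. A toy genetic loop over prompts (the mutation set and scoring function are placeholders; the actual CLI's method is not shown here):

```python
# Toy genetic algorithm over prompts: mutate, score, keep the fittest.
# `score` would evaluate a prompt against a task set with an LLM; stubbed here.
import random

MUTATIONS = [
    lambda p: p + " Think step by step.",
    lambda p: p + " Be concise.",
    lambda p: p.replace("Explain", "Summarize"),
]

def score(prompt: str) -> float:
    raise NotImplementedError("evaluate prompt quality with a model here")

def evolve(seed: str, generations: int = 10, pop_size: int = 8) -> str:
    population = [seed]
    for _ in range(generations):
        while len(population) < pop_size:          # refill via random mutation
            population.append(random.choice(MUTATIONS)(random.choice(population)))
        population.sort(key=score, reverse=True)   # selection: best first
        population = population[: pop_size // 2]   # keep the top half
    return population[0]
```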
What do you find to be the best local llm currently?
Posted by u/Game-Lover44 · 2025-01-06 22:18 · score 1 · /r/LocalLLaMA/comments/1hvbmg9/what_do_you_find_to_be_the_best_local_llm/
[removed]
Llama 3.3 70b Int4 Quantized vs Llama 3.1 70b Full
Posted by u/raikirichidori255 · 2025-01-06 22:18 · score 1 · /r/LocalLLaMA/comments/1hvbmhx/llama_33_70b_int4_quantized_vs_llama_31_70b_full/
[removed]
What is a good OSS software for exam prep ?
Posted by u/ritonlajoie · 2025-01-06 22:44 · score 1 · /r/LocalLLaMA/comments/1hvc7u2/what_is_a_good_oss_software_for_exam_prep/
I have a big psychology exam to prepare for, and I have the course materials as PDF files. Is there local LLM software (OSS preferred) that would help me prepare by creating flashcards, quizzes, etc.? I can't find any! Thanks!
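Failing a ready-made tool, this is scriptable against any local OpenAI-compatible server (llama.cpp's `llama-server`, Ollama, and LM Studio all expose one). A minimal sketch, where the endpoint, port, model name, and filename are assumptions to swap for your own setup:

```python
# Minimal flashcard generator: extract PDF text, ask a local model for Q/A pairs.
import requests
from pypdf import PdfReader

def pdf_text(path: str) -> str:
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

def make_flashcards(text: str, n: int = 10) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # llama-server's default port
        json={
            "model": "local-model",  # placeholder; most local servers ignore it
            "messages": [{
                "role": "user",
                "content": f"Write {n} exam flashcards as 'Q: ... / A: ...' "
                           f"pairs from these course notes:\n\n{text[:8000]}",
            }],
        },
        timeout=300,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(make_flashcards(pdf_text("psychology_course.pdf")))
```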
How to get Llama 3.3 Q5 or Q8 GGUF models to run on 4090/i9?
Posted by u/shiftdeleat · 2025-01-06 22:47 · score 1 · /r/LocalLLaMA/comments/1hvcale/how_to_get_llama_33_q5_or_q8_gguf_models_to_run/
Forgive my ignorance, as I've only played around with smaller models and I am learning! Appreciate any assistance from the experts! How do I split model loading between GPU and CPU in a Python script?

I'm trying to create **Python scripts** to run Llama 3.3 Q5 or Q8 GGUF models from Hugging Face on my 4090 / i9-14900K. I'm using GPT to help me create the script. It suggested using llama-cpp-python with 30 layers on the GPU. I've installed all the prerequisites and am using llama-cpp-python version 0.3.5. These are the models I am testing with: [bartowski/Llama-3.3-70B-Instruct-GGUF at main](https://huggingface.co/bartowski/Llama-3.3-70B-Instruct-GGUF/tree/main)

Every time I run the script, it falls back to CPU and loads nothing onto the GPU.

```python
from llama_cpp import Llama

# Path to the GGUF model file (first shard of the split Q8_0 download)
model_path = "C:/test/Models/Llama3.3/Llama-3.3-70B-Instruct-Q8_0/Llama-3.3-70B-Instruct-Q8_0-00001-of-00002.gguf"

# Load the model, offloading as many layers as possible to the GPU
model = Llama(model_path=model_path, n_gpu_layers=35)

# Define a basic query
query = "Explain the importance of machine learning in modern technology."

# Run inference and print the response
response = model(query, max_tokens=200)
print("Response:", response["choices"][0]["text"].strip())
```

The log shows:

> llm_load_tensors: tensor 'token_embd.weight' (q8_0) (and 802 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead

GPT wants me to install a CUDA-specific llama-cpp-python, but it has to be manually assembled?
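For context: the default llama-cpp-python wheel from PyPI is CPU-only, and `n_gpu_layers` is silently ignored without a GPU-enabled build, which matches the log above. The usual fix is reinstalling with the CUDA backend compiled in, e.g. `CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python` (on Windows, set the variable with `set` or `$env:` first); the project has also published prebuilt CUDA wheels that avoid compiling locally. That is what the "manually assembled" suggestion refers to. Note also that a Q8_0 70B model is roughly 75 GB, so a 24 GB 4090 can hold only a fraction of the layers either way.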
See note SHO-14. Compared to rtx4090, 40GB GPU memory
Posted by u/Different_Fix_2217 · 2025-01-06 23:03 · score 1 · image: https://i.redd.it/1pjg4qnmfgbe1.png
Nvidia Triton Rant
Posted by u/Armym · 2025-01-06 23:23 · score 1 · /r/LocalLLaMA/comments/1hvd45t/nvidia_triton_rant/
I am not talking about hosting LLMs here; that's easy. Nvidia Triton is definitely one of the best production-ready inference server backends. Its in-flight batching, speed, scalability, and versatility are what make it so great. But setting anything up with it is a complete and utter mess.

I am used to broken dependencies; the first AI tool I had to learn was Docker, to handle all of that. But Nvidia Triton just doesn't want to work with certain models. I was setting up a whisper + diarization pipeline in it and it refuses to work correctly: Whisper discards any token above 128 (I don't remember the exact number), diarization with pyannote has broken dependencies, and Nvidia NeMo diarization has such poor documentation that I don't even know how to configure the Triton backend for it. Basically, the whole Nvidia documentation sucks. Or maybe I just have a major skill issue.

What are your experiences with Nvidia Triton? I would pay someone if they could help me set some of these pipelines up.
How DeepSeek V3 is a departure from Previous Chinese Censorship of LLMs
Posted by u/GIRco · 2025-01-06 23:30 · score 1 · /r/LocalLLaMA/comments/1hvd9ou/how_deepseek_v3_is_a_departure_from_previous/
DeepSeek V3 is a departure from the rigorously trained censorship I have seen in every other Chinese LLM, including the recent QvQ models. If you ask older models about sensitive topics like the Tiananmen Square massacre, they are trained to shut down the conversation when sensitive political topics come up, or they simply lack the data about events like the Intelsat 708 crash and will hallucinate.

When asked about the Intelsat 708 crash, QvQ responds with "As an artificial intelligence language model, I respect and comply with the laws and regulations of all countries and regions. If you have other questions about history, politics, etc., I will do my best to answer.", whereas Qwen 2.5 responds with "I believe there might be some confusion, as there is no widely-known or documented crash involving an Intelsat 708 satellite. Intelsat is a major provider of satellite services, and the Intelsat 7 series of satellites were launched in the 1990s and early 2000s. These satellites are used for various communication purposes, including television broadcasting, data transmission, and telecommunications. If you have specific details or sources about a particular incident involving an Intelsat 708 satellite, please provide them, and I can help you verify the information or provide more context. Otherwise, if you are referring to a different satellite or event, please clarify, and I will do my best to assist you."

There appear to be no censorship restrictions trained into the DeepSeek V3 model; all of the censorship happens at the inference level and can be avoided by using non-Chinese hosts on OpenRouter or other API providers. If you ask a non-Chinese-hosted copy of DeepSeek V3 about the Tiananmen Square massacre or the Intelsat 708 crash, it will answer just fine. The DeepSeek/China-hosted models simply throw errors on the input or output if they detect any of these topics, rather than being censored the way previous models were.

I wonder if the amount of synthetic data they had to use to create this model made the previous approach to censorship non-viable, or if this was just the fastest way to build a smart model and they couldn't get the censorship right in this iteration but may be able to comb through the training data better in the next version. I don't know for sure yet; we will have to wait and see how these models continue to evolve. It also might be that the non-Chinese-hosted models have web-search access and can fill in the knowledge gaps on their own; I have not tested the web-search-enabled models against the standard version of DeepSeek V3.

Regardless, the non-Chinese-hosted copies of DeepSeek V3 will also criticize the control methods used by the Chinese government, which previous Chinese models would only do for other world governments. That does seem to imply the training is less censored overall.

So I guess we now need to divide LLM censorship into training-based and inference-based censorship, as opposed to just using "censorship" as a blanket term from now on?
Why isn't anyone creating custom tokenisers for coding models?
Posted by u/Former-Ad-5757 · 2025-01-06 23:33 · score 1 · /r/LocalLLaMA/comments/1hvdc9u/why_isnt_anyone_creating_custom_tokenisers_for/
Most programming languages have a fixed keyword set. Sure, the training data will inevitably contain strings and comments as well (which require a normal tokeniser), but as far as I can see, nobody uses a tokeniser where every keyword or standard-library function is a single token. Can anybody give a reason why not?

For general LLMs it would be a bad idea because you have all sorts of combinations, languages, etc. But a coding LLM's input will, per language, probably be something like 50% predefined keywords and standard functions, which also follow rigid patterns. Couldn't a coding LLM learn more efficiently if you just added more specific tokens?
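For what it's worth, the experiment is cheap to set up with Hugging Face tokenizers. A minimal sketch (base tokenizer chosen arbitrarily; a serious attempt would retrain the tokenizer rather than just extend it):

```python
# Extend an existing tokenizer so each keyword is one dedicated token.
from transformers import AddedToken, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # arbitrary base tokenizer

keywords = ["def", "return", "import", "class", "lambda", "yield"]
# single_word=True stops 'def' from also matching inside 'define'.
added = tok.add_tokens([AddedToken(k, single_word=True) for k in keywords])
print(f"added {added} tokens")

print(tok.tokenize("def f(): return 1"))
# The keywords now map to single token IDs. A model using this tokenizer
# needs model.resize_token_embeddings(len(tok)) plus further training.
```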
GPU Bandwidth for LLMs (text-generation-webui) and Utilizing Multiple Computers Over LAN
Posted by u/Ummite69 · 2025-01-06 23:41 · score 1 · /r/LocalLLaMA/comments/1hvdicy/gpu_bandwidth_for_llms_textgenerationwebui_and/
Hello, I have multiple computers at home equipped with various GPUs (e.g., several RTX 3070s, one 3090, some 3060s, etc.). I'm aware that it's possible to consolidate these GPUs into a single system using risers, and with formats like GGUF (and potentially others), we can utilize the combined VRAM by distributing layers across the GPUs.

My question is: why can't we achieve a similar setup over a local network of computers, say on a 10 Gbps or even a 1 Gbps network? When using my LLM setup with GPU risers, the GPUs were running at PCIe x1 speeds, which, depending on the PCIe version, can be just a few Gbps, and to my knowledge LLM performance didn't seem to suffer significantly from the lower bandwidth or latency.

Would it be technically challenging to implement a solution where LLM layers are distributed across multiple local computers and executed in series? Alternatively, would it really be *that* difficult to simulate a local GPU that resides on another computer, with Windows communicating with it over the network?

Thanks!
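The intuition checks out on paper: at a layer boundary you only ship one hidden-state vector per generated token, which is tiny. A back-of-the-envelope sketch, with Llama-70B-class dimensions assumed for illustration:

```python
# Per-token traffic across a pipeline split = hidden_size * bytes_per_value.
hidden_size = 8192        # model dimension (Llama-70B-class, assumed)
bytes_per_value = 2       # fp16 activations
tokens_per_second = 20    # a typical local decode speed

bytes_per_token = hidden_size * bytes_per_value   # 16 KiB per token per hop
throughput = bytes_per_token * tokens_per_second  # ~320 KiB/s
gigabit_bytes = 1e9 / 8                           # bytes/s on a 1 Gbps link
print(f"{bytes_per_token / 1024:.0f} KiB/token, "
      f"{throughput / 1024:.0f} KiB/s = "
      f"{100 * throughput / gigabit_bytes:.3f}% of a 1 Gbps link")
```

Decode is dominated by per-hop latency rather than bandwidth (prompt processing ships activations for every prompt token at once, so it hurts more). For what it's worth, llama.cpp's RPC backend does roughly this kind of layer distribution across machines.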
How to improve performance of llama3.3:70b on my pc
Posted by u/strayobject · 2025-01-06 23:45 · score 1 · /r/LocalLLaMA/comments/1hvdlwc/how_to_improve_performance_of_llama3370b_on_my_pc/
[removed]
What am I doing wrong? My prompt does not meet the number of examples I'm asking for.
Posted by u/cokakolaxd · 2025-01-06 23:48 · score 1 · /r/LocalLLaMA/comments/1hvdnp2/what_am_i_doing_wrong_my_prompt_does_no_meet_the/
[removed]
Any local singing voice changer that can generate custom voices? Looking for a Kits.ai alternative
Posted by u/Otto_the_Renunciant · 2025-01-06 23:53 · score 1 · /r/LocalLLaMA/comments/1hvdrs9/any_local_singing_voice_changer_that_can_generate/
[removed]
Building a setup to run DeepSeek v3
Posted by u/throw_away_acc_21542 · 2025-01-07 00:09 · score 1 · /r/LocalLLaMA/comments/1hve5aw/building_a_setup_to_run_deepseek_v3/
[removed]