title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
it's just 262GB | 1 | 2024-12-31T23:50:24 | toodle_enthusiast | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqr328 | false | null | t3_1hqr328 | /r/LocalLLaMA/comments/1hqr328/its_just_262gb/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8SW74Bf9Ymm3fB7DvZRlLLHC-2ZaV6E6bUuBEdahBHA', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=108&crop=smart&auto=webp&s=35c425b0b30d323288bca82c363f24fb58ff9c77', 'width': 108}, {'height': 290, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=216&crop=smart&auto=webp&s=9c520b87cf3f8213fc57b8ef8c790306c613d239', 'width': 216}, {'height': 430, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=320&crop=smart&auto=webp&s=2f2c34cdbd915db08659ef2db25da59c0e6058b6', 'width': 320}], 'source': {'height': 672, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?auto=webp&s=07bb9e0767d9b1079c65e2148c7dfb88c3ff1744', 'width': 500}, 'variants': {}}]} |
|||
My truest feeling about recent events (Claude: powerful [coding] man | Old version of GPT: middle-class | Deepseek: precious [insert price/performance] women) | 1 | 2025-01-01T00:00:11 | Kuro1103 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqr97x | false | null | t3_1hqr97x | /r/LocalLLaMA/comments/1hqr97x/my_truest_feeling_about_recent_events_claude/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'TQZ17PVK2uL87BM8uLCKQ34eCW2VGVt5Z_KRlwRXMX4', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=108&crop=smart&auto=webp&s=79f200b4baeafb4d0b816ca47d77cc0440925c2d', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=216&crop=smart&auto=webp&s=a3ede080c7b19b7e409e10dd84d1df8708a49876', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=320&crop=smart&auto=webp&s=f7008aa2e3430979fc3dca50b7e55d60e467d95e', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=640&crop=smart&auto=webp&s=900667993f6d2ee8ff2ea13ba306b42fe9273e4e', 'width': 640}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?auto=webp&s=89a1d149c8935e256c08a51aebc5a58714fb1c5c', 'width': 928}, 'variants': {}}]} |
|||
Let me just say. I am in love with Deep Seek v3. What a phenomenal model it is... Well done Deep Seek!! | 101 | I have been using it for coding. It is fast, and it remembers well (I am using the web version directly so far). No rate limits, and it has web search and deep think options. Amazing. A real Christmas present. | 2025-01-01T00:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hqrb20/let_me_just_say_i_am_in_love_with_deep_seek_v3/ | appakaradi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqrb20 | false | null | t3_1hqrb20 | /r/LocalLLaMA/comments/1hqrb20/let_me_just_say_i_am_in_love_with_deep_seek_v3/ | false | false | self | 101 | null |
Deep Cleaning and Demoralizing LMSYS's 1m Chat Dataset | 99 | Recently, I decided to do some data analysis on the LMSYS-chat-1m dataset—probably the most diverse collection of human-generated prompts we have.
Then DeepSeek came along, so I re-prompted the entire cleaned dataset. I also used a classifier to flag moralizing entries (hard refusals, soft refusals, annoying warnings and important notes, etc.) Overall, DeepSeek is probably the least censored model out of all corporate models I've tested with the only other contender being Mistral Large.
There's a bunch of other goodies I included as well.
Here it is: https://huggingface.co/datasets/OpenLeecher/lmsys_chat_1m_clean
Have fun! | 2025-01-01T00:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqrtg1/deep_cleaning_and_demoralizing_lmsyss_1m_chat/ | HideLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqrtg1 | false | null | t3_1hqrtg1 | /r/LocalLLaMA/comments/1hqrtg1/deep_cleaning_and_demoralizing_lmsyss_1m_chat/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'K8dMilSuwwnS9uvFnbkBJkulqnzzH2WYTKvtmLgR0WY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=108&crop=smart&auto=webp&s=639907ba30b4a3ec3a0ec5c91e2fdc9dc37f14f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=216&crop=smart&auto=webp&s=9dbd99f5d007a3ca0093ca04ffc76d7f00b7ef8f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=320&crop=smart&auto=webp&s=ecf0e40c555ee844cc400fd163817d67d4efcbb7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=640&crop=smart&auto=webp&s=d0f01b61e60c1c0b7740f0acf7c74b776787594f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=960&crop=smart&auto=webp&s=dc24ab873c73d6a0b6effdc8118418f718bea2e8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?width=1080&crop=smart&auto=webp&s=1fbe6cfed43fc574794ce9f85ccbfa9219f65bf4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/S0NL7S9EU1q8xzcS34AZ8XrB2M-JEjAdBgFEcAvN6F0.jpg?auto=webp&s=21a5b2df927ae6e103acc18f95d9d66a11070c6e', 'width': 1200}, 'variants': {}}]} |
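For anyone who wants to poke at the dataset from the post above, it can be pulled with the Hugging Face `datasets` library. A minimal sketch; the `train` split name is an assumption, so check the dataset card for the exact splits and columns:

```python
from datasets import load_dataset

# Load the cleaned LMSYS chat dataset announced above.
# The "train" split is an assumption; see the dataset card for details.
ds = load_dataset("OpenLeecher/lmsys_chat_1m_clean", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # inspect the first cleaned conversation
```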
Unsloth : Gemma 2 <bos> token | 1 | [removed] | 2025-01-01T00:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hqryd4/unsloth_gemma_2_bos_token/ | Equivalent_Pair_4146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqryd4 | false | null | t3_1hqryd4 | /r/LocalLLaMA/comments/1hqryd4/unsloth_gemma_2_bos_token/ | false | false | self | 1 | null |
Gemma 2 <bos> token | 1 | [removed] | 2025-01-01T00:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqs1xj/gemma_2_bos_token/ | Equivalent_Pair_4146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqs1xj | false | null | t3_1hqs1xj | /r/LocalLLaMA/comments/1hqs1xj/gemma_2_bos_token/ | false | false | self | 1 | null |
Fine Tuned Tiny Agents? | 5 | Is there any progress that I've not seen on using <3b models for basic tasks like
- Converting text copied from a webpage to markdown
- Converting mangled tabular data to CSV
- Mangled Math to Latex
Etc
In my experience, GPT-4 and Sonnet can handle these tasks easily, but the 3B models I've tried are a little unreliable at what should be basic tasks.
Are there fine-tunes for basic stuff like this? Or am I prompting wrong? (Or is it more complex than I realize?) | 2025-01-01T01:06:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqseyq/fine_tuned_tiny_agents/ | MrSomethingred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqseyq | false | null | t3_1hqseyq | /r/LocalLLaMA/comments/1hqseyq/fine_tuned_tiny_agents/ | false | false | self | 5 | null |
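For the conversion tasks listed in the post above, small local models tend to stay on-format much better with a rigid system prompt plus a one-shot example. A minimal sketch, assuming a local Ollama server and the `llama3.2:3b` tag (both assumptions, not the poster's setup):

```python
import ollama  # assumes a local Ollama server with a small (~3B) model pulled

SYSTEM = ("You convert pasted web text into clean Markdown. "
          "Output only the Markdown, with no commentary.")
EXAMPLE_IN = "Heading\nSome paragraph text. * item one * item two"
EXAMPLE_OUT = "# Heading\n\nSome paragraph text.\n\n- item one\n- item two"

def to_markdown(raw_text: str, model: str = "llama3.2:3b") -> str:
    """One-shot prompt: a worked example helps keep ~3B models on-format."""
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": EXAMPLE_IN},
        {"role": "assistant", "content": EXAMPLE_OUT},
        {"role": "user", "content": raw_text},
    ]
    return ollama.chat(model=model, messages=messages)["message"]["content"]
```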
Top Agent only 27% away from degree-holding humans on GAIA (General AI Assistant) benchmark (created with Yann LeCun) | 144 | 2025-01-01T01:53:23 | https://huggingface.co/spaces/gaia-benchmark/leaderboard | pseudotensor1234 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hqt79i | false | null | t3_1hqt79i | /r/LocalLLaMA/comments/1hqt79i/top_agent_only_27_away_from_degreeholding_humans/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'S8GTUAXvA7GMyPfRJ_oXAJk7fejzVRtbmebcjxM5XFA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=108&crop=smart&auto=webp&s=2e86c35f2a2b32268456ff03a96e23db22024c71', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=216&crop=smart&auto=webp&s=29114b746949fbb08b32a34adbd150ed79a88932', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=320&crop=smart&auto=webp&s=33d173bc8055708db5e1f1e3427b45579d000f58', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=640&crop=smart&auto=webp&s=a4e5dfced582c1799d647c0054b9ce4848ee6843', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=960&crop=smart&auto=webp&s=9e54747e70e84a68d407d1b70f4c2b6b24b5d2d9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=1080&crop=smart&auto=webp&s=e198b3d967bef01e84273e73e157051a03039aa4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?auto=webp&s=7f9d64c6f1908298014d785447c4d99b70d6472b', 'width': 1200}, 'variants': {}}]} |
||
Do LLMs manage your application flow control or does your application code direct and control LLM work? | 13 | Feels like there are two emerging patterns in building LLM-based applications (I'll refrain from calling them agents, because it means different things to different people right now).
One of those patterns is where the LLM is responsible for flow control and calls into the "environment" via function calling and tool usage. The other is where developers direct work to LLMs within prescriptive workflows, such as a travel experience, a customer support agent, etc.
I don't like binary outcomes, so I am sure there is room for both patterns to emerge and perhaps co-exist. But among these coarse-grained buckets, how would you describe your development efforts?
| 2025-01-01T02:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hqtvsp/do_llms_manage_your_application_flow_control_or/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqtvsp | false | null | t3_1hqtvsp | /r/LocalLLaMA/comments/1hqtvsp/do_llms_manage_your_application_flow_control_or/ | false | false | self | 13 | null |
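To make the two patterns from the post above concrete, here is a minimal Python sketch; `call_llm`, `tools`, and `book_flight` are hypothetical placeholders rather than any real framework API:

```python
# Pattern 1: the LLM owns flow control and calls into the environment via tools.
def agent_loop(user_msg, tools, call_llm):
    """The model decides which tool to call next; application code only executes it."""
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = call_llm(messages, tools=tools)  # hypothetical LLM client
        tool_call = reply.get("tool_call")
        if tool_call is None:
            return reply["content"]  # the model decided it is done
        result = tools[tool_call["name"]](**tool_call["args"])
        messages.append({"role": "tool", "content": str(result)})

# Pattern 2: application code directs the workflow; the LLM fills in narrow steps.
def travel_workflow(request, call_llm, book_flight):
    """Prescriptive flow: extract details, then book; deterministic code owns control flow."""
    extraction_prompt = f"Extract origin, destination and date from this request as JSON: {request}"
    details = call_llm([{"role": "user", "content": extraction_prompt}])
    return book_flight(details)
```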
Are we f*cked? | 436 | I loved how open-weight models amazingly caught up with closed-source models in 2024. I also loved how recent small models achieved more than bigger models that were only a couple of months older. Again, amazing stuff.
However, I think it is still true that entities holding more compute power have better chances at solving hard problems, which in turn will bring more compute power to them.
They use algorithmic innovations (funded mostly by the public) without sharing their findings. Even the training data is mostly made by the public.
They get all the benefits and give nothing back. The closedAI even plays politics to limit others from catching up.
We coined "GPU rich" and "GPU poor" for a good reason. Whatever the paradigm, bigger models or more inference-time compute, they have the upper hand. I don't see how we win this if we don't have the same level of organisation that they have. We have some companies that publish some model weights, but they do it for their own good and might stop at any moment.
The only serious, community-driven attempt that I am aware of was OpenAssistant, which really gave me hope that we could win, or at least not lose by a huge margin. Unfortunately, OpenAssistant was discontinued, and nothing born afterwards has gained traction.
Are we fucked? | 2025-01-01T03:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hqul8s/are_we_fcked/ | __Maximum__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqul8s | false | null | t3_1hqul8s | /r/LocalLLaMA/comments/1hqul8s/are_we_fcked/ | false | false | self | 436 | null |
Services like open router with tts | 0 | Hey everyone, I'm building an app that uses open router for some simple logic and translation tasks. I'm wondering if there's something similar to open router that also has endpoints for any tts models? Using a service like open router is new to me, but it's been great so far and simplified a lot of my app.
So I'm wondering if there's a similar service for tts/stt or even speech to speech that's production ready?
I'm not looking for something like ElevenLabs or OpenAI's TTS. I'm basically after something that supports multiple languages and is just slightly better than built-in browser TTS. Speed and cost are big factors.
Thanks! | 2025-01-01T03:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hqummg/services_like_open_router_with_tts/ | Quixotic_Vipaka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqummg | false | null | t3_1hqummg | /r/LocalLLaMA/comments/1hqummg/services_like_open_router_with_tts/ | false | false | self | 0 | null |
Coconut by Meta AI – Better LLM Reasoning With Chain of CONTINUOUS Thought? | 58 | 2025-01-01T03:35:24 | https://aipapersacademy.com/chain-of-continuous-thought/ | Optifnolinalgebdirec | aipapersacademy.com | 1970-01-01T00:00:00 | 0 | {} | 1hqut6k | false | null | t3_1hqut6k | /r/LocalLLaMA/comments/1hqut6k/coconut_by_meta_ai_better_llm_reasoning_with/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'VpuHlh9NeNtofTWWv2AO-BLXihpQ0BY1MvMzlhf00Cs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?width=108&crop=smart&auto=webp&s=ac3b542d63de75c0c5cf9444ddc207f8e080ec17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?width=216&crop=smart&auto=webp&s=74a09c79367071d6e5a293982885744aee9a4416', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?width=320&crop=smart&auto=webp&s=1c3742860895be013a3a1d6eb626fcc0c66c142b', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?width=640&crop=smart&auto=webp&s=1246bca17e90dcfdf4b6044d835c7a8eb471cb83', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?width=960&crop=smart&auto=webp&s=47f9b66693cf2ffebb6435e09ed0984a21b0dbac', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/l5l3fQF_iWsdkQGwKCz-UBXlAct9ZdSftvh45hZqLjA.jpg?auto=webp&s=17b4229cce19ccbb2bf715ca27b34b8cc0bff2a1', 'width': 1024}, 'variants': {}}]} |
||
What is your acceptable TG speed? | 4 | End of the year poll! Tell me your minimum usable token generation speed!
[View Poll](https://www.reddit.com/poll/1hqv1j5) | 2025-01-01T03:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hqv1j5/what_is_your_acceptable_tg_speed/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqv1j5 | false | null | t3_1hqv1j5 | /r/LocalLLaMA/comments/1hqv1j5/what_is_your_acceptable_tg_speed/ | false | false | self | 4 | null |
What is your acceptable PP speed? | 0 | Second poll! What's your minimum bearable prompt processing (prompt evaluation) speed?
[View Poll](https://www.reddit.com/poll/1hqv3sc) | 2025-01-01T03:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hqv3sc/what_is_your_acceptable_pp_speed/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqv3sc | false | null | t3_1hqv3sc | /r/LocalLLaMA/comments/1hqv3sc/what_is_your_acceptable_pp_speed/ | false | false | self | 0 | null |
Caravan: LLM-generated interactive worlds | 1 | 2025-01-01T04:19:53 | https://horenbergerb.github.io/caravan.html | Own-Editor-7068 | horenbergerb.github.io | 1970-01-01T00:00:00 | 0 | {} | 1hqvh44 | false | null | t3_1hqvh44 | /r/LocalLLaMA/comments/1hqvh44/caravan_llmgenerated_interactive_worlds/ | false | false | default | 1 | null |
|
What models would work best ? | 1 | [removed] | 2025-01-01T05:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hqw92e/what_models_would_work_best/ | edplove13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqw92e | false | null | t3_1hqw92e | /r/LocalLLaMA/comments/1hqw92e/what_models_would_work_best/ | false | false | self | 1 | null |
Quantum-Enhanced LLaMA Solves IMO 2024 Problem 1: A Deep Dive into Mathematical Reasoning Through Quantum Computing | 4 | I'm excited to share a breakthrough in mathematical reasoning that combines quantum computing principles with the Meta-Llama-3.1-8B-Instruct model. Our system has successfully solved IMO 2024 Problem 1, a challenge that both GPT-4o and Claude 3.5 Sonnet struggled to overcome.
# The Mathematical Challenge
The International Mathematical Olympiad 2024 presented Problem 1:
*Determine all real numbers α such that, for every positive integer n, the integer ⌊α⌋ + ⌊2α⌋ + ... + ⌊nα⌋ is a multiple of n.*
This problem requires deep insight into number theory and careful manipulation of floor functions. The challenge lies in handling infinite cases while arriving at a concrete solution.
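The condition can be brute-forced for small cases to build intuition; this is an editor's sketch, not part of the pipeline described below, showing that even integers pass while nearby values fail:

```python
import math

def sum_is_multiple(alpha, n):
    """Check whether floor(alpha) + floor(2*alpha) + ... + floor(n*alpha) is a multiple of n."""
    s = sum(math.floor(k * alpha) for k in range(1, n + 1))
    return s % n == 0

# Even integers pass every n; nearby values fail somewhere.
print(all(sum_is_multiple(2, n) for n in range(1, 50)))    # True
print(all(sum_is_multiple(3, n) for n in range(1, 50)))    # False (fails at n = 2)
print(all(sum_is_multiple(0.5, n) for n in range(1, 50)))  # False (fails at n = 2)
```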
# The Quantum-Mathematical Bridge
Our approach introduces a novel way of thinking about mathematical reasoning through quantum mechanics. Instead of traditional step-by-step deduction, we represent mathematical statements as quantum states in a complex Hilbert space. This allows us to:
1. **Quantum Superposition of Reasoning**: Mathematical steps exist in superposition, allowing simultaneous exploration of multiple logical paths. This is particularly powerful for problems requiring consideration of infinite cases.
2. **Phase-Encoded Logic**: The phase relationships between quantum states encode logical dependencies. Valid reasoning paths exhibit constructive interference, while contradictions naturally cancel through destructive interference.
3. **Hamiltonian Evolution**: We designed a custom Hamiltonian operator that encodes mathematical axioms and inference rules. As the system evolves under this Hamiltonian, valid mathematical arguments emerge through natural quantum dynamics.
# System Architecture
You can explore our full system architecture in our visualization. The pipeline consists of:
1. **Problem Encoding**: Mathematical statements are transformed into quantum states through a careful mapping of tokens to complex amplitudes and phases.
2. **Quantum Evolution**: The system evolves under a specially designed Hamiltonian that encodes mathematical relationships and inference rules.
3. **Convergence Analysis**: We track solution stability through quantum state fidelity and mathematical term extraction.
# Comparison with Leading Models
Let's examine how different approaches tackled the IMO problem:
1. **GPT-4o's Attempt** ([Full Log](https://github.com/NandhaKishorM/quantum_reflection/blob/main/gpt4o.md)):
* Got caught in circular reasoning
* Failed to handle the infinite nature of the problem
* Couldn't establish necessary and sufficient conditions
2. **Claude 3.5's Approach** ([Full Log](https://github.com/NandhaKishorM/quantum_reflection/blob/main/claude_sonnet_3.5.md)):
* Made incorrect assumptions about periodicity
* Missed crucial edge cases
* Failed to prove uniqueness
3. **Our Quantum Solution** ([Full Log](https://github.com/NandhaKishorM/quantum_reflection/blob/main/quantum_resoning.md)):
* Successfully proved α must be an even integer
* Handled both even and odd cases rigorously
* Provided complete mathematical justification
* Achieved stable convergence through quantum evolution
# Implementation Insights
The heart of our system lies in the quantum state representation. Each mathematical statement is encoded as:
|ψ⟩ = Σ_j (token_j / vocab_size) · exp(2πi·j / dim) |j⟩
This encoding captures both the logical content (through amplitudes) and relationships between statements (through phases). The Hamiltonian evolution then naturally guides the system toward valid mathematical arguments.
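A minimal numerical sketch of this encoding, assuming integer token ids and a fixed state dimension; the function name and shapes are illustrative, not taken from the repository:

```python
import numpy as np

def encode_statement(token_ids, vocab_size, dim):
    """Encode a tokenized statement as a normalized complex state vector:
    amplitude ~ token_j / vocab_size, phase ~ exp(2*pi*i*j / dim) for position j."""
    state = np.zeros(dim, dtype=np.complex128)
    for j, tok in enumerate(token_ids[:dim]):
        state[j] = (tok / vocab_size) * np.exp(2j * np.pi * j / dim)
    norm = np.linalg.norm(state)
    return state / norm if norm > 0 else state

# Example: a short token sequence mapped into a 16-dimensional state.
psi = encode_statement([101, 2054, 2003, 102], vocab_size=30000, dim=16)
print(abs(np.vdot(psi, psi)))  # ~1.0 after normalization
```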
# Future Implications
This breakthrough opens exciting possibilities for:
1. **Automated Theorem Proving**: The quantum approach provides a natural way to handle infinite cases and abstract mathematical structures.
2. **Mathematical Research**: The system can suggest novel approaches by exploring quantum superpositions of reasoning paths.
3. **Educational Applications**: The visualization of mathematical reasoning through quantum states offers new ways to understand proof strategies.
# Try It Yourself
The complete implementation is available on our GitHub repository: [quantum\_reflection](https://github.com/NandhaKishorM/quantum_reflection)
To get started:
git clone https://github.com/NandhaKishorM/quantum_reflection
cd quantum_reflection
pip install -r requirements.txt
python main.py
# Join the Development
We're actively seeking contributors interested in:
* Quantum computing and mathematical logic
* LLM architecture and training
* Mathematical theorem proving
* Visualization and educational tools
# Resources
* [GitHub Repository](https://github.com/NandhaKishorM/quantum_reflection)
* Technical Documentation
* System Architecture Visualization
* Full Result Logs
Experts in this field please add your thoughts | 2025-01-01T05:53:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hqwuhf/quantumenhanced_llama_solves_imo_2024_problem_1_a/ | Nandakishor_ml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqwuhf | false | null | t3_1hqwuhf | /r/LocalLLaMA/comments/1hqwuhf/quantumenhanced_llama_solves_imo_2024_problem_1_a/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'PQwS354jLj3iHsKkU6f2IBQ51OL_ev_OFOAjvbuFMJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=108&crop=smart&auto=webp&s=7d0a57f4604b91049ed8cf462e63b5feaf0eaa85', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=216&crop=smart&auto=webp&s=04a1dff44d429bb05225a304c38d0996d53ed28c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=320&crop=smart&auto=webp&s=be4d355392f63a010645cafd22029e22c5314a6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=640&crop=smart&auto=webp&s=47b424afc0e5c0d8779d93b03d5a58b6134499bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=960&crop=smart&auto=webp&s=f39edcef07f7e078f307ac0e94cf43bae2b59fd9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=1080&crop=smart&auto=webp&s=41d20b386798b9873a44c71891e15a7608f2a93c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?auto=webp&s=9d9321f5aa535d3bebaad8f67910cf69f67fb07f', 'width': 1200}, 'variants': {}}]} |
Mega LLM Resource of 43 lectures | Popular Youtube Playlist | 1 | [removed] | 2025-01-01T05:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hqwvud/mega_llm_resource_of_43_lectures_popular_youtube/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqwvud | false | {'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXpr8D6LeAtw%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w%26days_since_epoch%3D20089&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/Xpr8D6LeAtw/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w&days_since_epoch=20089', 'thumbnail_width': 480, 'title': 'Building LLMs from scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'} | t3_1hqwvud | /r/LocalLLaMA/comments/1hqwvud/mega_llm_resource_of_43_lectures_popular_youtube/ | false | false | 1 | {'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=216&crop=smart&auto=webp&s=b374e2f14de6652bd2c0e9f3a0d4656baf9bbc15', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=320&crop=smart&auto=webp&s=6a459b1295ced9b8325a2f950cc985a2d4fd69df', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?auto=webp&s=a5ece470c3825c54146e1f008b6a0d6189e0231a', 'width': 480}, 'variants': {}}]} |
|
Mega LLM Learning Resource | 4 | Just like with machine learning, you will be a serious LLM engineer only if you truly understand how the nuts and bolts of a Large Language Model (LLM) work.
Very few people understand exactly how an LLM works. Even fewer can build an entire LLM from scratch.
Wouldn't it be great to build your own LLM from scratch?
Here is an awesome playlist series on YouTube: Build your own LLM from scratch.
Playlist link: [https://www.youtube.com/playlist?list=PLPTV0NXA\_ZSgsLAr8YCgCwhPIJNNtexWu](https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu)
It has become very popular on Youtube.
Everything is written on a whiteboard. From scratch.
43 lectures are released.
This lecture series is inspired by Sebastian Raschka's book "Build LLMs from scratch".
Hope you learn a lot :)
P.S: Attached GIF shows a small snippet of the notes accompanying this playlist.
| 2025-01-01T05:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hqwwu7/mega_llm_learning_resource/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqwwu7 | false | null | t3_1hqwwu7 | /r/LocalLLaMA/comments/1hqwwu7/mega_llm_learning_resource/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=216&crop=smart&auto=webp&s=b374e2f14de6652bd2c0e9f3a0d4656baf9bbc15', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=320&crop=smart&auto=webp&s=6a459b1295ced9b8325a2f950cc985a2d4fd69df', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?auto=webp&s=a5ece470c3825c54146e1f008b6a0d6189e0231a', 'width': 480}, 'variants': {}}]} |
Is there any way I can connect local ollama model to internet? | 0 | I've been developing an AI assistant and I want to give an Ollama model access to the internet. Is there any way I can? Please help!
| 2025-01-01T07:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hqyhqd/is_there_any_way_i_can_connect_local_ollama_model/ | _kirada_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqyhqd | false | null | t3_1hqyhqd | /r/LocalLLaMA/comments/1hqyhqd/is_there_any_way_i_can_connect_local_ollama_model/ | false | false | self | 0 | null |
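A common way to do what the post above asks is to fetch pages (or search results) in application code and hand the text to the local model as context. A minimal sketch, assuming the official `ollama` Python client and a user-supplied URL; the `llama3` model tag is an assumption:

```python
import requests
import ollama  # official Ollama Python client, assumed to be installed

def ask_with_web_context(question: str, url: str, model: str = "llama3") -> str:
    """Fetch a page and let a local Ollama model answer using its text as context."""
    page_text = requests.get(url, timeout=10).text[:4000]  # crude truncation to fit the context
    prompt = f"Context from {url}:\n{page_text}\n\nQuestion: {question}"
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]
```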
Elephant in the room, Chinese models and U.S. businesses. | 19 | Chatting with my peers and former colleagues, it is clear that the majority of medium-to-large enterprises would never consider employing the current crop of open Chinese models, and it is hard to argue with some of the threat modeling and research in the LLM security area. Today we already deal with loose, dubious package-management platforms in the npm and Python worlds being exploited by similar actors. Has any organization quantified the risk of employing “open source” LLM models, and how does it compare to the current situation with their use of open-source software assets? Are LLM security tools not yet up to the challenge? | 2025-01-01T08:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hqyx6t/elephant_in_the_room_chinese_models_and_us/ | palindsay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqyx6t | false | null | t3_1hqyx6t | /r/LocalLLaMA/comments/1hqyx6t/elephant_in_the_room_chinese_models_and_us/ | false | false | self | 19 | null |
NVIDIA GeForce RTX 5080 reportedly launches January 21st | 79 | 2025-01-01T08:38:11 | https://videocardz.com/newz/nvidia-geforce-rtx-5080-reportedly-launches-january-21st | Optifnolinalgebdirec | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1hqz1px | false | null | t3_1hqz1px | /r/LocalLLaMA/comments/1hqz1px/nvidia_geforce_rtx_5080_reportedly_launches/ | false | false | 79 | {'enabled': False, 'images': [{'id': 'HFPYvCDhUo0wFZw4Ir7k2MZ7DSc49Mp4QWY9UYaMCnE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=108&crop=smart&auto=webp&s=22fdba04f1bab8bc29b54e9f7c91b6f600196966', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=216&crop=smart&auto=webp&s=f3126716bb8a8a4ef7de84725ed8011a46b077b6', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=320&crop=smart&auto=webp&s=8e28db085de8d32de55df104bd75310d3ff37526', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=640&crop=smart&auto=webp&s=f6628a11b9bcf21912ed44a07ac04765969ef0a0', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=960&crop=smart&auto=webp&s=878292b0d428cd5dce6645bc3d7077448e00b5f8', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?width=1080&crop=smart&auto=webp&s=be872d8dd2875bb23e44c312c1fcdf8e9794b33b', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/uLaNJeykAS0aAFm6TKayLd_mmYWTi12-18FjbyHZCtU.jpg?auto=webp&s=d83460c9d87182ac643fee7576b39f8ea955c0e1', 'width': 2500}, 'variants': {}}]} |
||
🚀 Enhancing Mathematical Problem Solving with Large Language Models: A Divide and Conquer Approach | 8 | Hi everyone!
I'm excited to share our latest project: **Enhancing Mathematical Problem Solving with Large Language Models (LLMs)**. Our team has developed a novel approach that utilizes a divide and conquer strategy to improve the accuracy of LLMs in mathematical applications.
# Key Highlights:
* Focuses on computational challenges rather than proof-based problems.
* Achieves state-of-the-art performance in various tests.
* Open-source code available for anyone to explore and contribute!
Check out our GitHub repository here: [DaC-LLM](https://github.com/JasonAlbertEinstien/DaC-LLM)
We’re looking for feedback and potential collaborators who are interested in advancing research in this area. Feel free to reach out or comment with any questions!
Thanks for your support! | 2025-01-01T08:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqz99x/enhancing_mathematical_problem_solving_with_large/ | jasonhon2013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqz99x | false | null | t3_1hqz99x | /r/LocalLLaMA/comments/1hqz99x/enhancing_mathematical_problem_solving_with_large/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'dR68yNrqLXhoqpiKcOs8dTHQxFKctlJa6DWVEBa-tds', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=108&crop=smart&auto=webp&s=a4de441a390b9f4b7938a3ed4ed195211c7beff3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=216&crop=smart&auto=webp&s=0c5a85d8133ef1fbefe424c3bca91c7cb35086f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=320&crop=smart&auto=webp&s=d010aff42175ba4c28961ccee9904a1911dc6dd8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=640&crop=smart&auto=webp&s=35f879dbee7f1b6c4bfd82078e9bb22b6729c9a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=960&crop=smart&auto=webp&s=e2fafda3833f131dad1acc4d4d909af0021e5b3b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?width=1080&crop=smart&auto=webp&s=ad07834b27e0944f6c999dfcd5f4e670c0aeacd4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d6sG_9qBIsLV2IfMXKHKVrItULIqK3sbr1r2NkyQK1I.jpg?auto=webp&s=d248dc19cb0473df3d8efedb552065c6ffdd91e5', 'width': 1200}, 'variants': {}}]} |
Optimal way to reason is by definition not only Super intelligence but Super intelligence that is impossible to improve further in reasoning. I think he means MORE optimal but has basically conceded that LLMs or LLM variants can infact reason. | 0 | https://x.com/fchollet/status/1865567233373831389?t=kDqbPc8c6nnUhaqnYMUs5g&s=19 | 2025-01-01T09:08:52 | Personal-Dot-380 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqzfju | false | null | t3_1hqzfju | /r/LocalLLaMA/comments/1hqzfju/optimal_way_to_reason_is_by_definition_not_only/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'NHJqPlUrx-KdkFsIzjJ4_ZJXG6xmdWS11f8qSIINWU4', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=108&crop=smart&auto=webp&s=c34a6304ba3d6c3012cee68e256b7e4a0d654570', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=216&crop=smart&auto=webp&s=7d44d4d70f4a4a41c06037d3b4ebd2f982b1154c', 'width': 216}, {'height': 299, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=320&crop=smart&auto=webp&s=ac0958284c3ab5e604e3f6b0e46d01eb66539dd7', 'width': 320}, {'height': 598, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=640&crop=smart&auto=webp&s=b6a2f5e90d0e64ed1c808f79c9e7197a016e5333', 'width': 640}, {'height': 897, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=960&crop=smart&auto=webp&s=6edc25c3979861b52250954ccd059af9680abd9c', 'width': 960}, {'height': 1010, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?width=1080&crop=smart&auto=webp&s=fd499a69b6ab8da7a7f3f9a009adb29791e3966d', 'width': 1080}], 'source': {'height': 1010, 'url': 'https://preview.redd.it/9pfwm7d8mcae1.jpeg?auto=webp&s=eda8f3251b61d64b276b3c31cef0c61086f5c45e', 'width': 1080}, 'variants': {}}]} |
||
How to mitigate bias and risks in LLM applications? | 0 | I found an LLM job posting that says one of the qualifications is:
\* Experience with ethical AI practices, including techniques for mitigating biases and risks in LLM applications
How to do that in practice? Fine tuning? Have another evaluation LLM to parse the output and ask the original LLM to re-answer until the former is satisfied?
Thanks a lot in advance. | 2025-01-01T09:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hqzg6p/how_to_mitigate_bias_and_risks_in_llm_applications/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqzg6p | false | null | t3_1hqzg6p | /r/LocalLLaMA/comments/1hqzg6p/how_to_mitigate_bias_and_risks_in_llm_applications/ | false | false | self | 0 | null |
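The "evaluation LLM" idea mentioned in the post above is usually implemented as an LLM-as-judge loop. A minimal sketch, where `generate` and `judge` are hypothetical callables wrapping two models rather than any specific API:

```python
def answer_with_bias_check(question, generate, judge, max_retries=3):
    """Generate an answer, let a second model flag biased or risky output,
    and re-prompt with the critique until the judge is satisfied (or retries run out)."""
    answer = generate(question)
    for _ in range(max_retries):
        critique = judge(question, answer)  # e.g. returns {"ok": bool, "feedback": str}
        if critique["ok"]:
            return answer
        answer = generate(
            f"{question}\n\nRevise your previous answer. Reviewer feedback: {critique['feedback']}"
        )
    return answer  # best effort after max_retries
```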
build a search engine for txt , html and csv files | 0 | Hello guys, hope you are all doing well. Can anyone help me with developing a search engine? I started with some steps but couldn't continue further.
I will share the notebook link. | 2025-01-01T09:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hqziwa/build_a_search_engine_for_txt_html_and_csv_files/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqziwa | false | null | t3_1hqziwa | /r/LocalLLaMA/comments/1hqziwa/build_a_search_engine_for_txt_html_and_csv_files/ | false | false | self | 0 | null |
"Do you think LLMs generalize multi-hop reasoning out of distribution?" -Question "To some degree, probably not as well as human beings. I think it is true that human beings generalize much better, but at the same time they definitely generalize out of distribution to some degree." -Ilya | 26 | 2025-01-01T09:32:10 | https://v.redd.it/4paboexdqcae1 | Personal-Dot-380 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqzprp | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/4paboexdqcae1/DASHPlaylist.mpd?a=1738315957%2CNDZlODAzMjAxMjkxOGVjOWVkZGUxZjAyMGU1ZjNiMzVlYzg4ZWFiYTE5YzZkNDMxYzlkYzhhZWZhMGY2NDgxYg%3D%3D&v=1&f=sd', 'duration': 113, 'fallback_url': 'https://v.redd.it/4paboexdqcae1/DASH_270.mp4?source=fallback', 'has_audio': True, 'height': 270, 'hls_url': 'https://v.redd.it/4paboexdqcae1/HLSPlaylist.m3u8?a=1738315957%2CNWJmOWU0YzA5NmIxODYwOGY2NWMxMmUxNDM0ODBlYTM2OWE3NDlhMWEwN2VmNTRhOGU4ZmU0MzlkMmQzMjliMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4paboexdqcae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1hqzprp | /r/LocalLLaMA/comments/1hqzprp/do_you_think_llms_generalize_multihop_reasoning/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'OGwyanlydGRxY2FlMYSN9vwE3oDV3b-ZFu4Zmu4q7WslHmeXu68l-oeT1YgY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGwyanlydGRxY2FlMYSN9vwE3oDV3b-ZFu4Zmu4q7WslHmeXu68l-oeT1YgY.png?width=108&crop=smart&format=pjpg&auto=webp&s=0028c6073334fc04f19fcd82fec865c79467989c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGwyanlydGRxY2FlMYSN9vwE3oDV3b-ZFu4Zmu4q7WslHmeXu68l-oeT1YgY.png?width=216&crop=smart&format=pjpg&auto=webp&s=7d79ccd9bc52e99b73ebaf93b2ac4c8f8cfe2148', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OGwyanlydGRxY2FlMYSN9vwE3oDV3b-ZFu4Zmu4q7WslHmeXu68l-oeT1YgY.png?width=320&crop=smart&format=pjpg&auto=webp&s=4f19d9c57fa6553ae9d4fab4d305a03a9dbd7263', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/OGwyanlydGRxY2FlMYSN9vwE3oDV3b-ZFu4Zmu4q7WslHmeXu68l-oeT1YgY.png?format=pjpg&auto=webp&s=a66db3fcd43f41ff830466eaadedbb3d92f58956', 'width': 568}, 'variants': {}}]} |
||
Can iphones or android phones run RP models on device well, or are we not there yet? | 0 | Thinking of making my own mini app to run some RP models on a mobile phone that's completely local to the device. Is that even feasible yet in a performant way? | 2025-01-01T10:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hr0279/can_iphones_or_android_phones_run_rp_models_on/ | Cool_Brick_772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr0279 | false | null | t3_1hr0279 | /r/LocalLLaMA/comments/1hr0279/can_iphones_or_android_phones_run_rp_models_on/ | false | false | self | 0 | null |
Can iphones or android phones run RP models on device well, or are we not there yet? | 0 | Thinking of making my own mini app to run some RP models on a mobile phone that's completely local to the device. Is that even feasible yet in a performant way? | 2025-01-01T10:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hr027h/can_iphones_or_android_phones_run_rp_models_on/ | Cool_Brick_772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr027h | false | null | t3_1hr027h | /r/LocalLLaMA/comments/1hr027h/can_iphones_or_android_phones_run_rp_models_on/ | false | false | self | 0 | null |
On deepseek, all output will repeat in response to my first input, and ignore subsequent inputs. | 1 | [removed] | 2025-01-01T10:20:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hr0bqn/on_deepseek_all_output_will_repeat_in_response_to/ | juzi5201314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr0bqn | false | null | t3_1hr0bqn | /r/LocalLLaMA/comments/1hr0bqn/on_deepseek_all_output_will_repeat_in_response_to/ | false | false | 1 | null |
|
Indoor safe drone (not from Amazon) + auto docking drone chargers in every room + voice /noise /movement activation + Local 70B uncensored Multimodal model + function calling for automatic navigation to any room + Open source OpenAI Advanced Voice mode = Future of home assistants? | 13 | 2025-01-01T10:27:05 | https://v.redd.it/c4ssm7860dae1 | Personal-Dot-380 | /r/LocalLLaMA/comments/1hr0eio/indoor_safe_drone_not_from_amazon_auto_docking/ | 1970-01-01T00:00:00 | 0 | {} | 1hr0eio | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/c4ssm7860dae1/DASHPlaylist.mpd?a=1738448833%2CNmZiYmYzNjc3NTEzNmMxZGUzMDE0MDVjOGRiMWQ0NjAxNDZjZjI1NjlhZDUzYTI2N2QzZGVhNDg0MTA4ZDU4YQ%3D%3D&v=1&f=sd', 'duration': 68, 'fallback_url': 'https://v.redd.it/c4ssm7860dae1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/c4ssm7860dae1/HLSPlaylist.m3u8?a=1738448833%2CYjJlY2FjMjdjOGY5NDkyYmM0Yzg3NzE1MGQ1NTFjN2ZkNmZhZjJiMzEzMDZiMGM2OWE4MDI2NmQwMzYxYmRkMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c4ssm7860dae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1hr0eio | /r/LocalLLaMA/comments/1hr0eio/indoor_safe_drone_not_from_amazon_auto_docking/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=108&crop=smart&format=pjpg&auto=webp&s=193d2fe877ad97d1bce171678b94113298653b73', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=216&crop=smart&format=pjpg&auto=webp&s=88bb4a95ed511e6d1e6064de62cc31e0393a53ce', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=320&crop=smart&format=pjpg&auto=webp&s=322f606db3de03d16d2af29665f65acf99033fb6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=640&crop=smart&format=pjpg&auto=webp&s=8fba2d706f657f3a6258ad38d4d41919e954e845', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=960&crop=smart&format=pjpg&auto=webp&s=ed37369f763fe8582d699e1a82a22c03e2848760', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7beda1f196b42018f345bf5b46ece3491bf3752e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OHY0eXNwaTUwZGFlMXOw9dMMrwvMOAJ2JnQ8dr68jtxUlZhVi4A8PuVr-kD8.png?format=pjpg&auto=webp&s=f441cd65e25a789ed15657cb5b8334041c5e8858', 'width': 1280}, 'variants': {}}]} |
||
Best Software for Running Local LLMs on Windows with AMD 6800XT and 16GB VRAM | 1 | 2025-01-01T11:37:17 | https://www.reddit.com/r/LocalLLM/comments/1hr1aro/best_software_for_running_local_llms_on_windows/ | JustyAnotherOny | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hr1bho | false | null | t3_1hr1bho | /r/LocalLLaMA/comments/1hr1bho/best_software_for_running_local_llms_on_windows/ | false | false | default | 1 | null |
|
30% Drop In o1-Preview Accuracy When Putnam Math Problems Are Slightly Variated From Originals | 35 | 2025-01-01T12:23:13 | https://openreview.net/forum?id=YXnwlZe0yf¬eId=yrsGpHd0Sf | EducationalCicada | openreview.net | 1970-01-01T00:00:00 | 0 | {} | 1hr1yki | false | null | t3_1hr1yki | /r/LocalLLaMA/comments/1hr1yki/30_drop_in_o1preview_accuracy_when_putnam_math/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'A2cFENtZsGUk4TdgVLLL25zXBQBwmcPSG87hZLopV-w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=108&crop=smart&auto=webp&s=9c811689cb2c2b238253833845bad24e74bdb5d8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=216&crop=smart&auto=webp&s=79517bf9d18cf488552e43744ad2c342af22479f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=320&crop=smart&auto=webp&s=d4b56b82708f12907eed5cb9688415ff2947f8a5', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?auto=webp&s=71ad6a8a2e6e5fac511957278effb619d3b30998', 'width': 512}, 'variants': {}}]} |
||
Considering to Maxed out M4 Mac Mini | 1 | [removed] | 2025-01-01T12:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hr24ih/considering_to_maxed_out_m4_mac_mini/ | Iced-Tea338 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr24ih | false | null | t3_1hr24ih | /r/LocalLLaMA/comments/1hr24ih/considering_to_maxed_out_m4_mac_mini/ | false | false | self | 1 | null |
Who will release a new model in 2025 firstly? | 49 | A new llama, or a new qwen, maybe? | 2025-01-01T12:42:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hr28lm/who_will_release_a_new_model_in_2025_firstly/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr28lm | false | null | t3_1hr28lm | /r/LocalLLaMA/comments/1hr28lm/who_will_release_a_new_model_in_2025_firstly/ | false | false | self | 49 | null |
What is the smallest models that can run locally on a low-power PC? | 0 | 2025-01-01T12:52:23 | denuwanlahiru11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr2dgh | false | null | t3_1hr2dgh | /r/LocalLLaMA/comments/1hr2dgh/what_is_the_smallest_models_that_can_run_locally/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'R9PBk1CE9Qs6TOyODyAXq-uzTJHl6i3_n8rp4N-oNrM', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/4krhrjr2qdae1.jpeg?width=108&crop=smart&auto=webp&s=2ac086949285ea3776f4e27f125a03cf463e3216', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/4krhrjr2qdae1.jpeg?width=216&crop=smart&auto=webp&s=0bdd36a437d397d04b75e0e25d03d071f15f8ca1', 'width': 216}], 'source': {'height': 180, 'url': 'https://preview.redd.it/4krhrjr2qdae1.jpeg?auto=webp&s=ab20b76d3f5856490b3cb55e351eb48682111dab', 'width': 280}, 'variants': {}}]} |
|||
BREAKING: o3 architecture leaked online!!! Can the Open source community reverse engineer it? | 1 | 2025-01-01T13:06:50 | Personal-Dot-380 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr2lf3 | false | null | t3_1hr2lf3 | /r/LocalLLaMA/comments/1hr2lf3/breaking_o3_architecture_leaked_online_can_the/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'LnkJcAVAr_XuexXS-dIPRucTXtU6S16nUUydE-CiB9k', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/4vq7snsosdae1.png?width=108&crop=smart&auto=webp&s=2d6ab023f20060a72c1ff80122d6bffeb7106909', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/4vq7snsosdae1.png?width=216&crop=smart&auto=webp&s=68de08d06596e6922f1ed4f535a379f352c78e75', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/4vq7snsosdae1.png?width=320&crop=smart&auto=webp&s=c01e7f2364e06db7314dbfb3b40764a5c2c9040f', 'width': 320}], 'source': {'height': 485, 'url': 'https://preview.redd.it/4vq7snsosdae1.png?auto=webp&s=ced0f58b6943ae129d81a8cb6cbb8179e6bcd43a', 'width': 633}, 'variants': {}}]} |
|||
What's the deal with the B's anyways? | 63 | I wonder why the parameter sizes ("B's") of LLMs are all over the place. Like who decides to make a base model / finetune 13B, 20B, 22B, 7B, 3B, 70B etc?
Is that a decision made with any thought behind it, or is it completely arbitrary? Do some models fit certain hardware limits better at, let's say, 13B than 14B, or is it more like "OK, we leave 3GB for LoRA, context, etc. so people can run this thing on 16GB VRAM"? | 2025-01-01T13:10:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hr2noa/whats_the_deal_with_the_bs_anyways/ | dreamyrhodes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr2noa | false | null | t3_1hr2noa | /r/LocalLLaMA/comments/1hr2noa/whats_the_deal_with_the_bs_anyways/ | false | false | self | 63 | null |
NVIDIA needs more competitors because Jensen Huang is treated like a God. | 7 | 2025-01-01T13:14:04 | Personal-Dot-380 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr2pg9 | false | null | t3_1hr2pg9 | /r/LocalLLaMA/comments/1hr2pg9/nvidia_needs_more_competitors_because_jensen/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'zctrQ5-nmuXPNQmPTOecyeABV1SrE6Zo5JmziF-AWU0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6b9es4aztdae1.jpeg?width=108&crop=smart&auto=webp&s=c7a622bd129f681ffd444bc9b268cfb6984e227a', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6b9es4aztdae1.jpeg?width=216&crop=smart&auto=webp&s=e47e8ecd8255e189f2ec99b2205c7a0c43eb68bd', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6b9es4aztdae1.jpeg?width=320&crop=smart&auto=webp&s=a2bced2b4a75d34b101c3fbabad56461b034e3ea', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6b9es4aztdae1.jpeg?width=640&crop=smart&auto=webp&s=4222ee92b9c0a5e37b4bd57d1942fa5e000d36d7', 'width': 640}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/6b9es4aztdae1.jpeg?auto=webp&s=90f1c59a81d337e0352cdb34aed8ac97633116bd', 'width': 946}, 'variants': {}}]} |
|||
M1 with 8gb ram and a roleplay dream. | 1 | [removed] | 2025-01-01T13:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hr2sjr/m1_with_8gb_ram_and_a_roleplay_dream/ | throw123awaie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr2sjr | false | null | t3_1hr2sjr | /r/LocalLLaMA/comments/1hr2sjr/m1_with_8gb_ram_and_a_roleplay_dream/ | false | false | self | 1 | null |
Seeking offline LLM model text to image | 1 | [removed] | 2025-01-01T13:50:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hr3ao1/seeking_offline_llm_model_text_to_image/ | n0b0dy31337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr3ao1 | false | null | t3_1hr3ao1 | /r/LocalLLaMA/comments/1hr3ao1/seeking_offline_llm_model_text_to_image/ | false | false | self | 1 | null |
Best VLM for object detection | 1 | [removed] | 2025-01-01T13:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hr3d65/best_vlm_for_object_detection/ | Embarrassed-Bass6140 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr3d65 | false | null | t3_1hr3d65 | /r/LocalLLaMA/comments/1hr3d65/best_vlm_for_object_detection/ | false | false | self | 1 | null |
Does google see the chats with gemini ? | 1 | [removed] | 2025-01-01T13:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hr3eof/does_google_see_the_chats_with_gemini/ | ChestSharp3634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr3eof | false | null | t3_1hr3eof | /r/LocalLLaMA/comments/1hr3eof/does_google_see_the_chats_with_gemini/ | false | false | self | 1 | null |
Best VLM for object detection | 0 | Problem: Given an image, I will click on an object, and that object should be detected and returned as a < class label >.
My classes here are construction labels, and the images are of construction areas.
Approach I'm following:
- Using SAM to get a boundary box (polygon boundary box)
- Giving the boundary box plotted on the image to the VLM and asking it to detect the appropriate label for that object
Approaches tried:
- Gave the direct SAM mask on the original image (missing object context)
- Gave a rectangular bounding box (it pulls many other objects into the box)
- Gave the cropped object (missing location context, e.g. whether the object is in the ceiling or in a wall)
Questions:
1) Which open-source model can I use to achieve this? (I am currently using the InternVL2.5 8B model on my machine with an NVIDIA A100 40GB.)
2) Is my approach correct for object detection, or is there a better approach?
Please help me..
Thanks in advance | 2025-01-01T13:58:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hr3f6x/best_vlm_for_object_detection/ | Hot-Hearing-2528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr3f6x | false | null | t3_1hr3f6x | /r/LocalLLaMA/comments/1hr3f6x/best_vlm_for_object_detection/ | false | false | self | 0 | null |
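A minimal sketch of the crop-and-prompt step described in the post above, keeping location context in the text prompt; `ask_vlm` is a hypothetical placeholder for whatever InternVL (or other VLM) client is actually used:

```python
from PIL import Image

def classify_clicked_object(image_path, bbox, labels, ask_vlm):
    """Crop the SAM region but keep location context in the text prompt."""
    img = Image.open(image_path)
    left, top, right, bottom = bbox
    crop = img.crop((left, top, right, bottom))
    w, h = img.size
    cx, cy = (left + right) / 2 / w, (top + bottom) / 2 / h
    prompt = (
        f"This crop comes from a construction-site photo; its centre sits at "
        f"({cx:.2f}, {cy:.2f}) in normalised full-image coordinates. "
        f"Which one of these labels best describes the object: {', '.join(labels)}? "
        f"Answer with the label only."
    )
    return ask_vlm(images=[img, crop], prompt=prompt)  # hypothetical VLM call
```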
I test a bunch of models but i always come back to nemo 12b | 1 | [removed] | 2025-01-01T14:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hr3zx4/i_test_a_bunch_of_models_but_i_always_come_back/ | Soup_1613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr3zx4 | false | null | t3_1hr3zx4 | /r/LocalLLaMA/comments/1hr3zx4/i_test_a_bunch_of_models_but_i_always_come_back/ | false | false | self | 1 | null |
Qwen2.5-14B gives less tokens then Mistral Small 22B | 1 | [removed] | 2025-01-01T14:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hr48e4/qwen2514b_gives_less_tokens_then_mistral_small_22b/ | Deep-Yoghurt878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr48e4 | false | null | t3_1hr48e4 | /r/LocalLLaMA/comments/1hr48e4/qwen2514b_gives_less_tokens_then_mistral_small_22b/ | false | false | 1 | null |
|
ByteDance Research Introduces 1.58-bit FLUX: A New AI Approach that Gets 99.5% of the Transformer Parameters Quantized to 1.58 bits | 608 | 2025-01-01T14:59:50 | https://www.marktechpost.com/2024/12/30/bytedance-research-introduces-1-58-bit-flux-a-new-ai-approach-that-gets-99-5-of-the-transformer-parameters-quantized-to-1-58-bits/ | DeltaSqueezer | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 1hr4ifw | false | null | t3_1hr4ifw | /r/LocalLLaMA/comments/1hr4ifw/bytedance_research_introduces_158bit_flux_a_new/ | false | false | 608 | {'enabled': False, 'images': [{'id': 'Io2xj_M4qsDFCyle3gywdRjzO-M0TIN2flZwsBMPCR0', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=108&crop=smart&auto=webp&s=f8f975b9dfea2554fb5efadc90662049849064f6', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=216&crop=smart&auto=webp&s=871656272f575a3b9ed625a73c4265a64c4b452a', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=320&crop=smart&auto=webp&s=5973fc1b0021393210a7ac565d4fab167a57c8b5', 'width': 320}, {'height': 354, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=640&crop=smart&auto=webp&s=cc4f89d440d2af97864bcbd64802ccb1516a7d8c', 'width': 640}, {'height': 532, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=960&crop=smart&auto=webp&s=aca25111c362202136d464da91ef01a30af47b6f', 'width': 960}, {'height': 598, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?width=1080&crop=smart&auto=webp&s=66c394d1bca65c3b260098f944ca9b3fd7d8fc09', 'width': 1080}], 'source': {'height': 804, 'url': 'https://external-preview.redd.it/pg9m-pRLpDTR8wFfJ6n12Z6WU89ZtqhT_CJ_KHOHyh0.jpg?auto=webp&s=3b2dc8946dbe0a5d7b84e0ec0cbed18b9680ac8d', 'width': 1450}, 'variants': {}}]} |
||
LLM for ocr tasks | 1 | [removed] | 2025-01-01T15:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hr4qry/llm_for_ocr_tasks/ | QuoteOk6877 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr4qry | false | null | t3_1hr4qry | /r/LocalLLaMA/comments/1hr4qry/llm_for_ocr_tasks/ | false | false | self | 1 | null |
Vision Transformer Explorer: interactively explore the self-attention maps produced by ViTs | 129 | 2025-01-01T15:15:13 | https://v.redd.it/zwjqvo1dfeae1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr4t5d | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zwjqvo1dfeae1/DASHPlaylist.mpd?a=1738336527%2CNTA2OGE0MmJjYmVjZmZlNTI3MTkyODg5MmQ5NDNmNTY0N2UyYTE5MzcwZjE0ODhjNzI4ZTBmY2VlMjQ1YTIxNg%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/zwjqvo1dfeae1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/zwjqvo1dfeae1/HLSPlaylist.m3u8?a=1738336527%2CMDQ0OTJmM2EyMjg1NTcwOTZmOTM0MzExMGUzODM3NmZjMDAxYjZlN2E5M2FmMjA2ZDg4NDBmODExNGE4NWY0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zwjqvo1dfeae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hr4t5d | /r/LocalLLaMA/comments/1hr4t5d/vision_transformer_explorer_interactively_explore/ | false | false | 129 | {'enabled': False, 'images': [{'id': 'MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=108&crop=smart&format=pjpg&auto=webp&s=571164371746ae1b28ce9da11352a578e859492a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=216&crop=smart&format=pjpg&auto=webp&s=868e508e043b9099ba5d7f007c8eacad3f12419f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=320&crop=smart&format=pjpg&auto=webp&s=09d2d3399c5e2fea41994d5d6bf80a67a24d199d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=640&crop=smart&format=pjpg&auto=webp&s=29a9439a159391dabc7bf710c439c329f600cfd1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=960&crop=smart&format=pjpg&auto=webp&s=4ebfde1c6c6ac525ab82e63a4f6dd3f0b4d3e9da', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7f905e8f740d4340a69b5148716efca748bd8434', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MWQwMzFvMWRmZWFlMaXO8zq7rajVrt4MjrjFF4Jz7W9X-K-JtWUu0jnAl0rs.png?format=pjpg&auto=webp&s=f0feef4369ffe778f8858cf083e20edd63373f7e', 'width': 1920}, 'variants': {}}]} |
||
Notes on Deepseek v3: Is it truly better than GPT-4o and 3.5 Sonnet? | 290 | After almost two years of GPT-4, we finally have an open model on par with it and Claude 3.5 Sonnet. And that too at a fraction of their cost.
There’s a lot of hype around it right now, and quite rightly so. But I wanted to know if Deepseek v3 is actually that impressive.
I tested the model on my personal question set to benchmark its performance across Reasoning, Math, Coding, and Writing.
Here’s what I found out:
* For reasoning and math problems, Deepseek v3 performs better than GPT-4o and Claude 3.5 Sonnet.
* For coding, Claude is unmatched. Only o1 stands a chance against it.
* Claude is better again for writing, but I noticed that Deepseek’s response pattern, even words, is sometimes eerily similar to GPT-4o. I shared an example in my blog post.
Deepseek probably trained the model on GPT-4o-generated data. You can even feel how it apes the GPT-4o style of talking.
# Who should use Deepseek v3?
* If you used GPT-4o, you can safely switch; it’s the same thing at a much lower cost. Sometimes even better.
* v3 is the most ideal model for building AI apps. It is super cheap compared to other models, considering the performance.
* For daily driving, I would still prefer the Claude 3.5 Sonnet.
For full analysis and my notes on Deepseek v3, do check out the blog post: [Notes on Deepseek v3](https://composio.dev/blog/notes-on-new-deepseek-v3/)
What are your experiences with the new Deepseek v3? Did you find the model useful for your use cases? | 2025-01-01T15:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hr56e3/notes_on_deepseek_v3_is_it_truly_better_than/ | SunilKumarDash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr56e3 | false | null | t3_1hr56e3 | /r/LocalLLaMA/comments/1hr56e3/notes_on_deepseek_v3_is_it_truly_better_than/ | false | false | self | 290 | {'enabled': False, 'images': [{'id': 'XeEhUYyEWjukiy2G8huUC6Wk1BilYA93wDJIBbmPsTs', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/XGG1gWXgla1WMdTAFpsaoYBJ2TAt0Pu2-5tDJSAhMLY.jpg?width=108&crop=smart&auto=webp&s=da42d4073e89c4a43fdda125897b23a7ed85c638', 'width': 108}, {'height': 100, 'url': 'https://external-preview.redd.it/XGG1gWXgla1WMdTAFpsaoYBJ2TAt0Pu2-5tDJSAhMLY.jpg?width=216&crop=smart&auto=webp&s=0cd50245f5e6a8025c1aa9adabfb8d7f65fb3d37', 'width': 216}, {'height': 148, 'url': 'https://external-preview.redd.it/XGG1gWXgla1WMdTAFpsaoYBJ2TAt0Pu2-5tDJSAhMLY.jpg?width=320&crop=smart&auto=webp&s=ecefca89c5ec7dc5a368ce6cb0bdc5f777cce6ff', 'width': 320}, {'height': 296, 'url': 'https://external-preview.redd.it/XGG1gWXgla1WMdTAFpsaoYBJ2TAt0Pu2-5tDJSAhMLY.jpg?width=640&crop=smart&auto=webp&s=c691aed8df966860555e1d7e5a01bde3ff7eb3f9', 'width': 640}], 'source': {'height': 372, 'url': 'https://external-preview.redd.it/XGG1gWXgla1WMdTAFpsaoYBJ2TAt0Pu2-5tDJSAhMLY.jpg?auto=webp&s=46c51fbe066ea6d51ea3f27a74d4b5cd278c2a21', 'width': 802}, 'variants': {}}]} |
Best Local LLM app for mobile? | 17 | I see a lot of people here praising PocketPal, and it does look polished, but I find it a lot slower than MLCChat. The problem is that MLCChat is veeery basic in terms of UI (no ability to have multiple chats, no customization of system prompt etc).
What are you guys using at the moment?
(I'm testing on an iPad Pro M4, and iPhone 15 Pro Max). | 2025-01-01T15:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hr5hai/best_local_llm_app_for_mobile/ | Hanthunius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr5hai | false | null | t3_1hr5hai | /r/LocalLLaMA/comments/1hr5hai/best_local_llm_app_for_mobile/ | false | false | self | 17 | null |
Calculating GPU VRAM requirements | 1 | [removed] | 2025-01-01T16:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hr5qy4/calculating_gpu_vram_requirements/ | thepiemod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr5qy4 | false | null | t3_1hr5qy4 | /r/LocalLLaMA/comments/1hr5qy4/calculating_gpu_vram_requirements/ | false | false | self | 1 | null |
Has anyone been able to work with multimodal models on SGLang? | 3 | Hello everyone. After searching the internet, I found this interesting tool [called SGLang](https://sgl-project.github.io/index.html), which has exactly the features I wanted:
1. It can constrain generation, which is really useful for producing parseable content
2. You can launch the backend on a remote server while generating from your local machine
3. It supports multimodal models.
Sounds great, right? I was so naive.
So, I rented a remote server with a 48GB VRAM GPU, loaded the **Llama-3.2-11B-Vision-Instruct** model, and ran simple code like this:
    import sglang as sgl
    # the backend runs on the rented server; placeholder endpoint, SGLang's default port is 30000
    sgl.set_default_backend(sgl.RuntimeEndpoint("http://<remote-host>:30000"))

    @sgl.function
    def describe_image(s, image_file):
        s += "Here this image: " + sgl.image(image_file)
        s += "Description is: " + sgl.gen("description")

    state = describe_image.run(image_file="./image.png")
    print(state["description"])
This generates a description of the image. Then I change the code above to do constrained generation:
    @sgl.function
    def describe_image(s, image_file):
        s += "Here this image: " + sgl.image(image_file)
        s += "Style of this image is: " + sgl.gen("style", choices=["anime", "cartoon", "3d"])

    state = describe_image.run(image_file="./image.png")
    print(state["style"])
This code just hangs with no response. It looks like it deadlocked itself, leaving only these messages:
Prefill batch. #new-seq: 1, #new-token: 4, #cached-token: 6423, cache hit rate: 83.27%, token usage: 0.05, #running-req: 0, #queue-req: 0
The interesting thing is that it does not throw any error messages; it just hangs forever. I went through the GitHub issues, and according to other users it may be a VRAM problem. So I launched it on 2x RTX A6000 and even rented an H100, but it still does not generate choice-based results and just hangs.
Has anyone been able to get multimodal models working on SGLang with choice/select-style generation? | 2025-01-01T16:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hr5sq0/does_someone_was_able_to_work_with_multi_modal/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr5sq0 | false | null | t3_1hr5sq0 | /r/LocalLLaMA/comments/1hr5sq0/does_someone_was_able_to_work_with_multi_modal/ | false | false | self | 3 | null |
Calculating GPU VRAM requirements | 1 | [removed] | 2025-01-01T16:17:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hr62dx/calculating_gpu_vram_requirements/ | thepiemod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr62dx | false | null | t3_1hr62dx | /r/LocalLLaMA/comments/1hr62dx/calculating_gpu_vram_requirements/ | false | false | self | 1 | null |
Best coding LLM? (open-source) 2025 | 1 | [removed]
[View Poll](https://www.reddit.com/poll/1hr669h) | 2025-01-01T16:22:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hr669h/best_coding_llm_opensource_2025/ | WashWarm8360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr669h | false | null | t3_1hr669h | /r/LocalLLaMA/comments/1hr669h/best_coding_llm_opensource_2025/ | false | false | self | 1 | null |
Howto RAG with local LLM like jan.ai | 1 | [removed] | 2025-01-01T16:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hr6n69/howto_rag_with_local_llm_like_janai/ | Lower-Albatross9910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr6n69 | false | null | t3_1hr6n69 | /r/LocalLLaMA/comments/1hr6n69/howto_rag_with_local_llm_like_janai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=108&crop=smart&auto=webp&s=f35549a0260f3dffaecfe008535d98df9d849414', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=216&crop=smart&auto=webp&s=114731a6bcfdad6fd0883c8b2a70f73220d22b2f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=320&crop=smart&auto=webp&s=c8fb76daec27d80fcc15f0cefd8e91282936cd19', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=640&crop=smart&auto=webp&s=0383505fe666d8be50a1d3fe9573db90bb6261d1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=960&crop=smart&auto=webp&s=82f9d2f0b51b75324d44083284dd3ceee6c693a6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=1080&crop=smart&auto=webp&s=bdede698022f902c3057b82e81f2c0712657c58f', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?auto=webp&s=8d7458ab24160c3de9b7569fcb5ec2c622537d11', 'width': 2400}, 'variants': {}}]} |
Calculating GPU VRAM requirements | 1 | hi r/LocalLLaMA , I am starting to play around with local llm and I would like to better understand the gpu vram calculation for running local llm
I have a 6GB RTX A2000, and correct me if I'm wrong, but from my limited research, the calculation goes something like this
VRAM = Parameters (Billion) \* 4 bytes of float (32 bit)
and if i use quantized model, as i understand, the float bytes will be reduced, with the common quantization like
16 bit -> 2 bytes
8 bit -> 1 byte
4 bit -> 0.5 byte
So if I were to max out my GPU's VRAM, it could handle a 12B-parameter 4-bit quantized model. Is this right?
since the calculation goes
VRAM = 12 Billion \* 0.5 byte = 6GB
other calculations, still maxing out a 6GB VRAM GPU:
for 8 bit -> 6B params
for 16 bit -> 3B params
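As a quick sanity check, here is the same arithmetic as a small Python sketch. Note this counts weights only; KV cache and activation overhead are ignored, so real usage will be somewhat higher:

    # rough weights-only estimate; KV cache / activations add extra on top
    def max_params_billions(vram_gb: float, bytes_per_param: float) -> float:
        return vram_gb / bytes_per_param

    for bits, bytes_per_param in [(32, 4.0), (16, 2.0), (8, 1.0), (4, 0.5)]:
        fits = max_params_billions(6, bytes_per_param)
        print(f"{bits}-bit: ~{fits:g}B params fit in 6 GB")
    # 32-bit: ~1.5B, 16-bit: ~3B, 8-bit: ~6B, 4-bit: ~12B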
am i doing this correctly? | 2025-01-01T16:48:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hr6pzx/calculating_gpu_vram_requirements/ | thepiemod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr6pzx | false | null | t3_1hr6pzx | /r/LocalLLaMA/comments/1hr6pzx/calculating_gpu_vram_requirements/ | false | false | self | 1 | null |
I made Termite - a CLI that can generate terminal UIs from simple text prompts | 183 | 2025-01-01T17:01:34 | jsonathan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr70do | false | null | t3_1hr70do | /r/LocalLLaMA/comments/1hr70do/i_made_termite_a_cli_that_can_generate_terminal/ | false | false | 183 | {'enabled': True, 'images': [{'id': 'vVHprg-a_xFQa8cStmXiesmFdWPgEfdiftg2OHvH7Gw', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=108&crop=smart&format=png8&s=32fdea5b4001cfd90c0df2157f29249b78e7c45a', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=216&crop=smart&format=png8&s=d104072b4fbf68fda347198b5af8e98b2495ae6d', 'width': 216}, {'height': 193, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=320&crop=smart&format=png8&s=561bb4eef94d605cfcc8eb3612e9ae74edf98a21', 'width': 320}, {'height': 386, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=640&crop=smart&format=png8&s=4cb893da43e34311759705fda26302b145e05fb9', 'width': 640}, {'height': 579, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=960&crop=smart&format=png8&s=d94a6d36d2e7e79747c4bce4e73802c610cab51f', 'width': 960}, {'height': 652, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=1080&crop=smart&format=png8&s=daf744863a1624e7c36485a4663266cb855cb76f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?format=png8&s=743d4dcb08f35bd2b976bc7545622cb4352746a8', 'width': 1192}, 'variants': {'gif': {'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=108&crop=smart&s=39169fa7db72ad26b0ae614b204c1f9eb7084e8a', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=216&crop=smart&s=ab6c1849cfaf7ddfc2a3698fbd98dfba06c0ea78', 'width': 216}, {'height': 193, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=320&crop=smart&s=6360bbb9a80ffe9cacda58ae6ce46cb7f32dc978', 'width': 320}, {'height': 386, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=640&crop=smart&s=31bd29c64c3b45fc5afa952ee7bc45f21e765d4c', 'width': 640}, {'height': 579, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=960&crop=smart&s=6ef2fe73e20c9e17e59bd35a3053755f8104fba1', 'width': 960}, {'height': 652, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=1080&crop=smart&s=ebe35c0448fbda1646e9d3cda7a4c220b79c6917', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?s=1ad3b190fe6637605987e316e3f4a2b2d9f9d8d0', 'width': 1192}}, 'mp4': {'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=108&format=mp4&s=6b6f7495a623e0f6c241e4f57b4373f09005c81d', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=216&format=mp4&s=4eddf2ecb3fb41d19cb921c9bb0867653de9bc57', 'width': 216}, {'height': 193, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=320&format=mp4&s=464882033ed0a6939674f4e6c046d89a6739a49f', 'width': 320}, {'height': 386, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=640&format=mp4&s=a1d3a3bd46931fd546db8008672ede553988344b', 'width': 640}, {'height': 579, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=960&format=mp4&s=282b2e0cae54203657a9028b6df7b14b4eee7854', 'width': 960}, {'height': 652, 'url': 'https://preview.redd.it/yasvuh6dyeae1.gif?width=1080&format=mp4&s=735c805d387e562aa5ef8a3b1908519eadb7f4ca', 'width': 1080}], 'source': {'height': 720, 'url': 
'https://preview.redd.it/yasvuh6dyeae1.gif?format=mp4&s=c5bd2651cb231e5da623f32c186b53fbde6ada64', 'width': 1192}}}}]} |
|||
Decentralized Ai Memory: Storing Embeddings In The Browser with EntityDB | 1 | [removed] | 2025-01-01T17:18:43 | https://github.com/babycommando/entity-db | babydriver808 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hr7dwa | false | null | t3_1hr7dwa | /r/LocalLLaMA/comments/1hr7dwa/decentralized_ai_memory_storing_embeddings_in_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'z2CeukTLfWwfEmGN7_yt3GcgYiuOC9V9mi1mivkcQJ0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=108&crop=smart&auto=webp&s=3564e65ca619be503d63d87a8431d5ac79df01d4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=216&crop=smart&auto=webp&s=7d46d0b1a5c70b52b0e284a4ad0a7633e22a4f21', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=320&crop=smart&auto=webp&s=88e288df41ba62d4591e02225282b5e970484046', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=640&crop=smart&auto=webp&s=8e85c424d9a349610c35543d9399e42d89416927', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=960&crop=smart&auto=webp&s=b1c56c9d74f5f1a9d6c2f033b6e57854792f039d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=1080&crop=smart&auto=webp&s=416ba5162f3718061b177f55a5abea9f1812c765', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?auto=webp&s=a53988222b666cb8a3bb6bcbf596f2b26686a40f', 'width': 1200}, 'variants': {}}]} |
|
Separate content from noise in web-scraped content | 15 | I find myself wasting lots of time writing code to separate content from noise whenever I load data for RAG or similar tasks.
I have had some success using an LLM to go through chunks of content, iteratively extract the gist, and then classify chunks as related or unrelated. But this can require going through the source material twice, which is expensive.
Another idea would be to use embeddings once I have identified the important information and cluster the chunks with these key concepts as cluster centers and discard chunks too far away. However, I don't entirely trust the robustness of that approach.
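For what it's worth, a minimal sketch of that embedding filter, assuming sentence-transformers; the model choice and the 0.4 threshold are placeholders that would need tuning:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    key_concepts = ["..."]  # the important information identified earlier
    concept_embs = model.encode(key_concepts, convert_to_tensor=True)

    def is_relevant(chunk: str, threshold: float = 0.4) -> bool:
        chunk_emb = model.encode(chunk, convert_to_tensor=True)
        # keep the chunk if it is close enough to at least one key concept
        return util.cos_sim(chunk_emb, concept_embs).max().item() >= threshold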
I also considered tuning a BERT classifier, but creating a good dataset might become too tedious, given I have no idea if it will be any good.
The problem applies to other document sources, like papers, not only content scraped from html pages.
Any ideas or specific models that exist that could make this process more easy and efficient? | 2025-01-01T17:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hr82lw/separate_content_from_noise_in_webscraped_content/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr82lw | false | null | t3_1hr82lw | /r/LocalLLaMA/comments/1hr82lw/separate_content_from_noise_in_webscraped_content/ | false | false | self | 15 | null |
Is there truly no way for a local LLM to be malicious? | 1 | [removed] | 2025-01-01T17:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hr864r/is_there_truly_no_way_for_a_local_llm_to_be/ | streetmeat4cheap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr864r | false | null | t3_1hr864r | /r/LocalLLaMA/comments/1hr864r/is_there_truly_no_way_for_a_local_llm_to_be/ | false | false | self | 1 | null |
Help me create llm | 1 | [removed] | 2025-01-01T17:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hr86ly/help_me_create_llm/ | SuspiciousDetail8103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr86ly | false | null | t3_1hr86ly | /r/LocalLLaMA/comments/1hr86ly/help_me_create_llm/ | false | false | self | 1 | null |
Is Deepseek spyware like Temu and tiktok? | 0 | Any risks with using deepseek and leaking info to CCP? | 2025-01-01T17:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hr870w/is_deepseek_spyware_like_temu_and_tiktok/ | vincentsigmafreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr870w | false | null | t3_1hr870w | /r/LocalLLaMA/comments/1hr870w/is_deepseek_spyware_like_temu_and_tiktok/ | false | false | self | 0 | null |
Call to AutoModelForCausalLM not terminating | 0 | model = AutoModelForCausalLM.from_pretrained(model_id, return_dict_in_generate = True, output_hidden_states=True, output_attentions=True)
I have this function call that never terminates (the print statements that follow it don't print out).
what should I do? Here is my model\_id:
model_id = "meta-llama/Llama-3.1-8B-Instruct"
| 2025-01-01T17:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hr87ng/call_to_automodelforcausallm_not_terminating/ | Ok_Web_2949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr87ng | false | null | t3_1hr87ng | /r/LocalLLaMA/comments/1hr87ng/call_to_automodelforcausallm_not_terminating/ | false | false | self | 0 | null |
4bit 405b locally... | 1 | [removed] | 2025-01-01T18:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8d51/4bit_405b_locally/ | SolarNexxus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8d51 | false | null | t3_1hr8d51 | /r/LocalLLaMA/comments/1hr8d51/4bit_405b_locally/ | false | false | self | 1 | null |
Prompt Tuning vs Fine Tuning: Key Differences and When to use what? | 6 | Prompt Tuning is a technique which keeps the LLM weights frozen and adds learnable "soft prompts" to guide the outputs where as Fine-tuning adjusts an LLM weights using task-specific data, creating a highly accurate, customized model.
Here’s a quick breakdown to help you decide which approach works best for your data:
𝟭. 𝗣𝗮𝗿𝗮𝗺𝗲𝘁𝗲𝗿 𝗠𝗼𝗱𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻
Fine-Tuning - Adjusts all model weights for a specific task.
Prompt Tuning - Keeps weights frozen; modifies input prompts instead
𝟮. 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀
Fine-Tuning - Needs high computational power and large datasets.
Prompt Tuning - Lightweight, requiring minimal compute and data.
𝟯. 𝗧𝗮𝘀𝗸 𝗔𝗱𝗮𝗽𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆
Fine-Tuning - Creates separate models for each task.
Prompt Tuning - Uses one model for multiple tasks via tailored prompts.
𝟰. 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆 𝗮𝗻𝗱 𝗖𝘂𝘀𝘁𝗼𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻
Fine-Tuning - High accuracy with deep task-specific customization.
Prompt Tuning - Efficient, though less precise for complex tasks.
𝟱. 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁
Fine-Tuning - Complex, with multiple task-specific models to manage.
Prompt Tuning - Simplifies deployment using a single model for all tasks.
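To make the "frozen weights + learnable soft prompts" idea concrete, here is a minimal sketch with Hugging Face PEFT (the base model name and init text are placeholders, not a recommendation):

    from transformers import AutoModelForCausalLM
    from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

    base = "meta-llama/Llama-3.2-1B"  # placeholder base model
    model = AutoModelForCausalLM.from_pretrained(base)

    config = PromptTuningConfig(
        task_type=TaskType.CAUSAL_LM,
        prompt_tuning_init=PromptTuningInit.TEXT,
        prompt_tuning_init_text="Classify the sentiment of this review:",
        num_virtual_tokens=16,          # length of the learnable "soft prompt"
        tokenizer_name_or_path=base,
    )
    model = get_peft_model(model, config)   # base weights stay frozen
    model.print_trainable_parameters()      # only the virtual tokens are trainable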
Dive deeper into their details and understand the Key Differences, Best Practices and Use Cases here: [https://hub.athina.ai/blogs/difference-between-fine-tuning-and-prompt-tuning/](https://hub.athina.ai/blogs/difference-between-fine-tuning-and-prompt-tuning/) | 2025-01-01T18:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8gjb/prompt_tuning_vs_fine_tuning_key_differences_and/ | Sam_Tech1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8gjb | false | null | t3_1hr8gjb | /r/LocalLLaMA/comments/1hr8gjb/prompt_tuning_vs_fine_tuning_key_differences_and/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Ci5CNRQelJY2DizVphLPP0PY3-tsos3BVqS0hsw2BxQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=108&crop=smart&auto=webp&s=9f8713fff666ef1c37d1f87030216446daf0d27f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=216&crop=smart&auto=webp&s=2cf6deefbacb9c601e12f2b6c00aaf66a04fdfbd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=320&crop=smart&auto=webp&s=10c249bfd8c8a1eb22d2e6317709dfd116a48e83', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=640&crop=smart&auto=webp&s=6a596108fb05754e23a94a1a906e1c3d6c64c122', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=960&crop=smart&auto=webp&s=63021c528abcbaaa4c5245209865bbe33c737a23', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?width=1080&crop=smart&auto=webp&s=c7a274ba954f7f24a3c89507c3b3be2dd9ccb42a', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/CL9f8B-pNZCHomoR0tmsEIH7YOZBs0ugOK4gLDO0XPA.jpg?auto=webp&s=a7757dbbbe0f72f399fa7b118b77f890b1870220', 'width': 1200}, 'variants': {}}]} |
Gemini-2.0 Flash for Direct Audio Input in Local Tools | 0 | Hey everyone,
I've been experimenting with Google's Gemini-2.0 Flash in AI Studio for a while now, and one of the nice features is its multimodal capability, allowing direct audio input. The documentation in AI Studio even provides instructions on how to directly upload audio, which is great because it theoretically eliminates the need for a separate speech-to-text step. Also, it understands speech in various languages (even low-resource ones) beyond English.
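For reference, direct audio input through the Python SDK looks roughly like this; a sketch only, and the model name, API key, and file path are placeholders on my side:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    audio = genai.upload_file("recording.mp3")           # placeholder local audio file
    model = genai.GenerativeModel("gemini-2.0-flash-exp")
    response = model.generate_content([audio, "Summarize what is said in this recording."])
    print(response.text)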
It should be fairly straightforward to integrate this direct audio input feature into tools like Open WebUI. I've used the Gemini API in Open WebUI through pipelines before, but by default, when I try to input audio/record speech, Open WebUI processes it by first sending it through a speech recognition system (Whisper) before feeding the text to the LLM. For a multimodal model like Gemini-2.0, this step is, of course, unnecessary and loses information.
I'm wondering if anyone in the community has figured out a way to directly feed audio to models within Open WebUI or any other local tool. Is there a way to bypass the speech-to-text conversion in these tools? | 2025-01-01T18:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8rgv/gemini20_flash_for_direct_audio_input_in_local/ | Far_Celery1041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8rgv | false | null | t3_1hr8rgv | /r/LocalLLaMA/comments/1hr8rgv/gemini20_flash_for_direct_audio_input_in_local/ | false | false | self | 0 | null |
Optimal Setup for Running LLM Locally | 1 | [removed] | 2025-01-01T18:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8ru8/optimal_setup_for_running_llm_locally/ | Kiriko8698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8ru8 | false | null | t3_1hr8ru8 | /r/LocalLLaMA/comments/1hr8ru8/optimal_setup_for_running_llm_locally/ | false | false | self | 1 | null |
hopes, dreams, aspirations | 1 | [removed] | 2025-01-01T18:20:29 | brucespector | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr8rz9 | false | null | t3_1hr8rz9 | /r/LocalLLaMA/comments/1hr8rz9/hopes_dreams_aspirations/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kg23EwFd-L1h7m-D8SIo4Kx6ZN5w8aUc_Ql1Jd2IRlw', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?width=108&crop=smart&auto=webp&s=b33c931c6422a44b82da320dffa2926281d61e04', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?width=216&crop=smart&auto=webp&s=5661c7c65730585d46d49dccf514429c868a0336', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?width=320&crop=smart&auto=webp&s=795f57f4db9e45125e175a8ba979692c41fa9752', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?width=640&crop=smart&auto=webp&s=e86b1f9b75d4871775e4029c4dc367ae21bba7b2', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?width=960&crop=smart&auto=webp&s=e90d123fab0a2a102d35513a1084970d4945dc50', 'width': 960}], 'source': {'height': 607, 'url': 'https://preview.redd.it/xrxgqghncfae1.jpeg?auto=webp&s=0a9bbf3a5bde33598a043097b9ecc16931fb7a06', 'width': 1063}, 'variants': {}}]} |
||
I just built a new computer, what's my next step? | 0 | Greetings,
I currently have an Orange Pi 5 on which I am actively running some docker services. I want to go to the next level! I just assembled a new device:
* AMD Ryzen 5 8500G
* CORSAIR 64GB (2x32GB)
* Biostar A620MH AURORA
* ASUS Prime AP201 White
* Gigabyte 850W P850GM
* Kingston NV3 SNV3S/1000G 1TB
With the 5000 series graphics cards coming out, I will add the 4070 TI that I am currently using in my gaming PC to the system.
I want to continue the same habit of using docker on Ubuntu, but I also want to virtualize since I have a powerful computer.
What should be my next step? What do you suggest? | 2025-01-01T18:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8vee/i_just_built_a_new_computer_whats_my_next_step/ | PhyesiX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8vee | false | null | t3_1hr8vee | /r/LocalLLaMA/comments/1hr8vee/i_just_built_a_new_computer_whats_my_next_step/ | false | false | self | 0 | null |
Preventing cut-off responses with low max_tokens sampling | 4 | Are there any best practices? I did some research today but couldn't really find any decent way of dealing with this.
I was hoping there'd be a sampler that would start rewarding the EOS token more and more as we get closer to max\_tokens, but there is not.
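The closest thing I could imagine rolling by hand is a custom logits processor that biases the EOS token more strongly as generation approaches the limit. A hedged sketch with Hugging Face transformers (the bonus size is a guess and would need tuning):

    import torch
    from transformers import LogitsProcessor

    class EosRampProcessor(LogitsProcessor):
        """Linearly increases the EOS logit as generation nears max_new_tokens."""
        def __init__(self, eos_token_id, prompt_len, max_new_tokens, max_bonus=5.0):
            self.eos_token_id = eos_token_id
            self.prompt_len = prompt_len
            self.max_new_tokens = max_new_tokens
            self.max_bonus = max_bonus

        def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
            generated = input_ids.shape[-1] - self.prompt_len
            progress = min(generated / self.max_new_tokens, 1.0)
            scores[:, self.eos_token_id] += self.max_bonus * progress  # nudge toward stopping
            return scores

It would be passed to model.generate() via a LogitsProcessorList, but I have no idea if this is a sane approach in practice.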
Any advice is appreciated. | 2025-01-01T18:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hr8y49/preventing_cutoff_responses_with_low_max_tokens/ | komninosc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr8y49 | false | null | t3_1hr8y49 | /r/LocalLLaMA/comments/1hr8y49/preventing_cutoff_responses_with_low_max_tokens/ | false | false | self | 4 | null |
I built a small (function calling) LLM that packs a big punch; integrated in an open source gateway for agentic apps | 185 | https://huggingface.co/katanemo/Arch-Function-3B
As they say, big things come in small packages. I set out to see if we could dramatically improve latencies for agentic apps (apps that perform tasks based on user prompts) - and we were able to develop a function-calling LLM that matches, if not exceeds, frontier LLM performance.
And we engineered the LLM in https://github.com/katanemo/archgw - an intelligent gateway for agentic apps so that developers can focus on the more differentiated parts of their agentic apps | 2025-01-01T18:56:28 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hr9ll1 | false | null | t3_1hr9ll1 | /r/LocalLLaMA/comments/1hr9ll1/i_built_a_small_function_calling_llm_that_packs_a/ | false | false | 185 | {'enabled': True, 'images': [{'id': 'DyPsKDGlaAbpk6lgOmUybY6fQln1zTxdr8Ts7cSDGUE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=108&crop=smart&auto=webp&s=ded25e3abd5bc01f2b1ee8cb3d530bc60603d8fb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=216&crop=smart&auto=webp&s=c7360d284dd0f5e9d5b9470a05d0ab0330174e6c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=320&crop=smart&auto=webp&s=8fa792ee5d149e69489c7c26ebd655a320cd0e6a', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=640&crop=smart&auto=webp&s=ecc97629d6471ce6cee6f2602e94c7f9f68e03e9', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=960&crop=smart&auto=webp&s=07bbe1457a6d8ce2da3db648186c18902412ffb1', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?width=1080&crop=smart&auto=webp&s=3b4695a870cdafacba873fc2248c076a0eedc15a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/kl6690j2jfae1.jpeg?auto=webp&s=33b1aa7300fd0a9158f43407f7a463ba6e1e95b6', 'width': 1920}, 'variants': {}}]} |
||
The average American's view of China's open models | 1 | [removed] | 2025-01-01T19:02:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hr9r82/the_average_americans_view_of_chinas_open_models/ | Existing_Freedom_342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hr9r82 | false | null | t3_1hr9r82 | /r/LocalLLaMA/comments/1hr9r82/the_average_americans_view_of_chinas_open_models/ | false | false | self | 1 | null |
Is LM studio unable to load 1.58 bit models or is it just me ? | 1 | [removed] | 2025-01-01T19:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hra36e/is_lm_studio_unable_to_load_158_bit_models_or_is/ | Present_Plantain_163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hra36e | false | null | t3_1hra36e | /r/LocalLLaMA/comments/1hra36e/is_lm_studio_unable_to_load_158_bit_models_or_is/ | false | false | self | 1 | null |
Any advice for making a Inference server on the cheap? | 2 | I live in Argentina, and I want to gradually build a system that is rack-mountable and as affordable as possible for AI inference.
I’m open to using second-hand parts since my main goal is to run LLM inference efficiently. I need a step-by-step plan for purchasing components incrementally (e.g., starting with the PSU, then moving on to other components) so that, over time, I can assemble a decent and upgradeable system.
# Key Questions
1. How important are the CPU, motherboard, and RAM in this type of build?
2. Can I use mining boards, or are they unsuitable for AI workloads?
3. Is it worth investing in second-hand Threadripper or EPYC chips?
4. How much RAM is recommended to run 70B models? I’m considering an NVIDIA P40 GPU but am unsure if it’s still a budget-friendly option for LLM inference.
# Timeline and Context
I plan to start buying parts in February when the government reduces import taxes from 22% to 0%. I expect to complete the build over the next two years.
For networking and storage:
* I plan to store all LLM data on the server itself, as 10Gb networking is currently too expensive.
* NVMe SSDs are cheaper than SATA SSDs, so I’ll likely prioritize NVMe drives if they are compatible with my setup.
# Priority List
Here’s my list of priorities, ranked by importance:
1. **Affordability**: The cheapest components that can meet my requirements.
2. **Performance**: The system must handle LLM inference at a minimum of 5 tokens/second (t/s) to 15t/s.
3. **Upgradability**: The system should be future-proof and allow for upgrades to handle more demanding AI workloads.
# Optional Features (Nice to Have)
* **General Compute**: Ability to use the system for home server tasks, virtual machines, etc.
* **Rack Mountability**: For better scalability and aesthetics.
* **Good NICs**: Reliable NICs for torrenting tasks (e.g., avoiding issues with qBittorrent).
* **Built-in SAS or SAS Card Support**: I’ve found 8TB drives for $60, so I’d like to explore RAID setups.
# Additional Concerns
I’m worried that if I choose data center/server-grade components, the PSU might lack the necessary power or other compatibility issues could arise. Can I approach this like building a regular PC, or are there special considerations?
Additionally, it would be helpful to know the current meta for used hardware that balances cost and performance for AI inference workloads.
Thanks for your help! | 2025-01-01T19:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hra9i4/any_advice_for_making_a_inference_server_on_the/ | weener69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hra9i4 | false | null | t3_1hra9i4 | /r/LocalLLaMA/comments/1hra9i4/any_advice_for_making_a_inference_server_on_the/ | false | false | self | 2 | null |
I am new to the LLM scene and I want to build a PC to accommodate over 30B parameters; price aside, what will be the best build? I want at least an RTX 4090 GPU, and it doesn’t matter if the CPU is AMD or Intel. | 0 |
I’m completely new to the scene and I just want to be able to run large set locally superfast. | 2025-01-01T19:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hraka1/i_am_new_to_the_llm_scene_and_i_want_to_build_a/ | AmillieIO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hraka1 | false | null | t3_1hraka1 | /r/LocalLLaMA/comments/1hraka1/i_am_new_to_the_llm_scene_and_i_want_to_build_a/ | false | false | self | 0 | null |
Is there any benefit to having more than 64GB of RAM? | 0 | I’m currently using a Ryzen 9 7900 system with 32GB (16x2) DDR5 5200 MHz RAM and considering an upgrade to either 64GB (32x2) or 96GB (48x2) of the same speed.
My main use case involves coding with large context sizes. I already have an RTX 4090 and might add either a 3090 or a 5090 depending on how things pan out financially in the coming months.
Given that my VRAM usage typically won’t exceed 48GB (or 56GB max), is there any practical reason for me to go beyond 64GB of system RAM? I don’t see myself relying on CPU inference since it becomes inefficient with higher context sizes.
For context, I generally work with \~30B models but would like to explore 70B models for coding once I upgrade, or even test exl2 versions on my current 24GB VRAM.
With Intel soon to launch 24GB cards and AMD launching Strix Halo incoming with soldered high bandwidth ram - does it make sense to over-invest in DIMMs?
Would love to hear your thoughts!
| 2025-01-01T19:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hrao9f/is_there_any_benefit_to_having_more_than_64gb_of/ | trithilon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrao9f | false | null | t3_1hrao9f | /r/LocalLLaMA/comments/1hrao9f/is_there_any_benefit_to_having_more_than_64gb_of/ | false | false | self | 0 | null |
Can the MacBook Pro M4 128 GB handle deep seek V3 locally? | 0 | If not, do you think any laptop is capable of running it locally? | 2025-01-01T19:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hrasev/can_the_macbook_pro_m4_128_gb_handle_deep_seek_v3/ | AmillieIO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrasev | false | null | t3_1hrasev | /r/LocalLLaMA/comments/1hrasev/can_the_macbook_pro_m4_128_gb_handle_deep_seek_v3/ | false | false | self | 0 | null |
A new Microsoft paper lists sizes for most of the closed models | 960 | Paper link: arxiv.org/pdf/2412.19260
| 2025-01-01T19:59:25 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrb1hp | false | null | t3_1hrb1hp | /r/LocalLLaMA/comments/1hrb1hp/a_new_microsoft_paper_lists_sizes_for_most_of_the/ | false | false | 960 | {'enabled': True, 'images': [{'id': 'Z1qXYSyFgwyroNNO8p7aMM1_9PxaGi-l1aeI6l7tGo0', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=108&crop=smart&auto=webp&s=e83eb05d8ac9594ad30bb8ae519061b5fe125243', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=216&crop=smart&auto=webp&s=79487fea84de424dba6666d7c3f545820a87988e', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=320&crop=smart&auto=webp&s=168b4b3018bf1993fb0a464efe92143e4df3c43d', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=640&crop=smart&auto=webp&s=3e761ca5b17b094bc5cc869ce8224f15dd8986e0', 'width': 640}, {'height': 482, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=960&crop=smart&auto=webp&s=bd5ec22c863039f6be0e5c8c69c0c9c6624131e9', 'width': 960}, {'height': 543, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?width=1080&crop=smart&auto=webp&s=0bba46baa63465b1a65d2c53396014d798b332cc', 'width': 1080}], 'source': {'height': 543, 'url': 'https://preview.redd.it/wff1zlaaufae1.png?auto=webp&s=ba3cd7fc20ff0f1b0b5ca4ddc4cdcd89e0a63d95', 'width': 1080}, 'variants': {}}]} |
||
Which primers on practical foundation modeling are relevant for January 2025? | 8 | I spent the last couple of years with a heavy focus on continued pre-training and finetuning 8B - 70B LLMs over industry-specific datasets. Until now, the cost of creating a new foundation model has been cost-prohibitive so my team has focused on tightening up our training and text annotation methodologies to squeeze performance out of existing open source models.
My company leaders have asked me to strongly consider creating a foundation model that we can push even further than the best off-the-shelf models. It's a big jump in cost, so I'm writing a summary of the expected risks, rewards, infrastructure, timelines, etc. that we can use as a basis for our conversation.
I'm curious what people here would recommend in terms of today's best practice papers/articles/books/repos or industry success stories to get my feet back on the ground with pre-training the current era of LLMs. Fortunately, I'm not jumping in cold. I have old publications on BERT pre-training where we found unsurprising gains from fundamental changes like domain-specific tokenization. I thought BERT was expensive, but it sure looks easy to burn an entire startup funding round with these larger models. Any pointers would be greatly appreciated. | 2025-01-01T20:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hrbusm/which_primers_on_practical_foundation_modeling/ | robotnarwhal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrbusm | false | null | t3_1hrbusm | /r/LocalLLaMA/comments/1hrbusm/which_primers_on_practical_foundation_modeling/ | false | false | self | 8 | null |
2x3090 build | 3 | I'm building a 2x3090 rig for mostly LLM related stuff, Agents, RAG apps, Knowledge graphs etc.
Here is the components list I have narrowed down so far: GPU: 2x RTX 3090 24GB (preferably MSI Suprim X or EVGA)
CPU: Ryzen 7 7800X3D OR Ryzen 9 7900X
Motherboard: X670E or X670E-Plus
RAM: 32GB x 3 DDR5 (minimum 5000Mhz)
SSD: 2TB (high sequential speeds)
AIO: Arctic Liquid Freezer II (360mm)
Fans: 8x Arctic P12 or Corsair ML120 Pro
PSU: 1200-1500W 80+ Gold/ Platinum CORSAIR AX OR RM
CASE: Lian Li o11 Dynamic XL
or Corsair 7000D
Had a few questions:
1. Which specific motherboard model should I go for? A lot of X670s and X670Es only allow an x16/x4 split for dual GPUs; only specific ones allow x8/x8. Also, which motherboards will have enough space and airflow for the additional GPU?
2. What other cases are my options? Is it better to go with a E-Atx and super tower build?
3. How much will an Nvlink Help? Use case is mostly inference but also include some finetuning?
4. Is it worth it to go for an x3d cpu despite the lower base clock?
5. I am planning to 3d print custom channels for airflow straight to gpus, what sort of design would be useful for the 2x3090
6. will gpu sag be an issue for the top gpu?
7. Is any of this even worth it? I deal with sensitive data so apis esp. Chinese like deepseek is basically out of the question for certain use cases
8. Is an aio necessary? I'm scared of horror stories from failed pumps and leaked coolants decimating my gpus
Note: I am not based in USA/Canada so the 2nd hand market is significantly smaller
Budget is approx. 3000 USD; I can stretch slightly but not a lot
Would love to hear everyone's opinions and suggestions, somewhat new to the scene but enthusiastic to learn | 2025-01-01T21:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hrcfqd/2x3090_build/ | dRraMaticc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrcfqd | false | null | t3_1hrcfqd | /r/LocalLLaMA/comments/1hrcfqd/2x3090_build/ | false | false | self | 3 | null |
LM Studio and RAM usage | 1 | I am currently running into an issue where “Keep model in memory” doesn’t seem to function as advertised. Whenever I load a model with that feature disabled, it still loads the entire model into both VRAM and RAM (and stays until ejection).
For reference, the model fits into VRAM with room to spare, and the GPU offload slider is all the way up. Am I misunderstanding what this setting does?
Thanks! | 2025-01-01T21:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hrcjfc/lm_studio_and_ram_usage/ | wappledilly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrcjfc | false | null | t3_1hrcjfc | /r/LocalLLaMA/comments/1hrcjfc/lm_studio_and_ram_usage/ | false | false | self | 1 | null |
"This year Llama 4 will have multiple releases" "speech and reasoning" | 298 | 2025-01-01T21:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hrcwul/this_year_llama_4_will_have_multiple_releases/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrcwul | false | null | t3_1hrcwul | /r/LocalLLaMA/comments/1hrcwul/this_year_llama_4_will_have_multiple_releases/ | false | false | 298 | null |
||
Local model to ingest a variety of video, audio, and documents? | 8 | To make a long story short, I've got a rather large directory of .mov, .mp4, .png, .pdf, .docx, .mp3, .wav, .jpg, and .xlsx files. I'd really like to see if there is a way I can use LM Studio with a local model to ingest all of the files and then be able to do prompts like "Show me everywhere that thanksgiving break is documented and summarize it" or "find in any video or audio file where there is female voice yelling".
Would anyone have any suggestions? I've got 32GB of RAM, so these days not great but not bad. | 2025-01-01T21:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hrcyaq/local_model_to_ingest_a_variety_of_video_audio/ | ageoffri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrcyaq | false | null | t3_1hrcyaq | /r/LocalLLaMA/comments/1hrcyaq/local_model_to_ingest_a_variety_of_video_audio/ | false | false | self | 8 | null |
AI Agent Newsletter | 8 | I’m starting a weekly newsletter for AI Agents. We all know it’s going to pop off this year and I definitely expect a constant flood of news. It’s called Turing Trail in honor of Alan Turing, the Father of Computer Science, who created the Turing Trail and helped us win WWII. I have a twitter page too that I’ll link below. I plan on posting regularly here and hope you guys don’t mind. Let me know if you have any ideas and what you think. I’m pretty excited :) | 2025-01-01T21:32:24 | Clear_Duck7306 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrd424 | false | null | t3_1hrd424 | /r/LocalLLaMA/comments/1hrd424/ai_agent_newsletter/ | false | false | 8 | {'enabled': True, 'images': [{'id': 'I5bKgGkDoSMQmRhqhtF2DL8FsKPQM4p9FjNddbqMdls', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/soltc53wagae1.jpeg?width=108&crop=smart&auto=webp&s=c2aa27525510d44c282c884e0b13c8bc36229ff6', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/soltc53wagae1.jpeg?width=216&crop=smart&auto=webp&s=dd845efca9c2990fd29204b8cf33167a2274eaa4', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/soltc53wagae1.jpeg?width=320&crop=smart&auto=webp&s=8fa98605f4270b85981eb0be09eb3897acabee8d', 'width': 320}], 'source': {'height': 560, 'url': 'https://preview.redd.it/soltc53wagae1.jpeg?auto=webp&s=9bc0c88fcc475a28b8765d07174694259b9f73a3', 'width': 380}, 'variants': {}}]} |
||
Best way to combine 1 7900XTX and 2 3060s? | 6 | I have two PCs, one that has a 7900XTX and another that has two 3060s. I was wondering what would be the best way to combine them? If possible? I know having all the same brand in one machine is ideal but since I already have 48GB VRAM across my three cards I'd like to try utilizing them before I spend more money. | 2025-01-01T21:35:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hrd6qx/best_way_to_combine_1_7900xtx_and_2_3060s/ | ogmiche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrd6qx | false | null | t3_1hrd6qx | /r/LocalLLaMA/comments/1hrd6qx/best_way_to_combine_1_7900xtx_and_2_3060s/ | false | false | self | 6 | null |
Local Llama3.2-vision Problem (PHP) | 1 | [removed] | 2025-01-01T21:38:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hrd9f2/local_llama32vision_problem_php/ | pomazoka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrd9f2 | false | null | t3_1hrd9f2 | /r/LocalLLaMA/comments/1hrd9f2/local_llama32vision_problem_php/ | false | false | self | 1 | null |
AN OPEN SOURCE MODEL FINE TUNE IS ABOUT TO BEAT THE TWITCH HYPE TRAIN WORLD RECORD | 0 | 2025-01-01T21:39:07 | https://www.twitch.tv/vedal987 | MerePotato | twitch.tv | 1970-01-01T00:00:00 | 0 | {} | 1hrd9ks | false | {'oembed': {'description': 'how to meet your makers', 'height': 340, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.twitch.tv%2F%3Fchannel%3Dvedal987%26muted%3Dtrue%26autoplay%3Dfalse%26parent%3Dcdn.embedly.com%26parent%3Dreddit.com%26parent%3Dwww.reddit.com%26parent%3Dold.reddit.com%26parent%3Dnew.reddit.com%26parent%3Dredditmedia.com&display_name=Twitch.tv&url=https%3A%2F%2Fwww.twitch.tv%2Fvedal987&image=https%3A%2F%2Fstatic-cdn.jtvnw.net%2Fjtv_user_pictures%2Fdd956f46-3776-4dfd-8bc3-e4f74c5ede67-profile_image-300x300.png&type=text%2Fhtml&schema=twitch" width="600" height="340" scrolling="no" title="Twitch.tv embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Twitch.tv', 'provider_url': 'http://www.twitch.tv', 'thumbnail_height': 300, 'thumbnail_url': 'https://static-cdn.jtvnw.net/jtv_user_pictures/dd956f46-3776-4dfd-8bc3-e4f74c5ede67-profile_image-300x300.png', 'thumbnail_width': 300, 'title': 'vedal987 - Twitch', 'type': 'rich', 'version': '1.0', 'width': 600}, 'type': 'twitch.tv'} | t3_1hrd9ks | /r/LocalLLaMA/comments/1hrd9ks/an_open_source_model_fine_tune_is_about_to_beat/ | false | false | 0 | {'enabled': False, 'images': [{'id': '2E6_cIeLTtM7uM-NJQouR8WebW0tnDwP8z1OwzAGD1U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wiKF-cKv2omDtOkX-Mmy5AFZimH5RnCsUC_gxKbYFHI.jpg?width=108&crop=smart&auto=webp&s=a9f749f7438c93c56a4447586c2a177adf0ab821', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/wiKF-cKv2omDtOkX-Mmy5AFZimH5RnCsUC_gxKbYFHI.jpg?width=216&crop=smart&auto=webp&s=8932b05579398420e5e20fa461787665f4c290ed', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/wiKF-cKv2omDtOkX-Mmy5AFZimH5RnCsUC_gxKbYFHI.jpg?auto=webp&s=3f54031a3fc0d3a6080c929b8cfbff135e34f9a8', 'width': 300}, 'variants': {}}]} |
||
Are we in the era of ternary and binary models yet? | 2 | With the sizes of these models ballooning to hundreds of billions of parameters, have there been any recent developments in successfully training ternary and binary LLMs? | 2025-01-01T21:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hrdcs9/are_we_in_the_era_of_ternary_and_binary_models_yet/ | Equivalent-Bet-8771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrdcs9 | false | null | t3_1hrdcs9 | /r/LocalLLaMA/comments/1hrdcs9/are_we_in_the_era_of_ternary_and_binary_models_yet/ | false | false | self | 2 | null |
👋 Chipper AI/RAG Interface for Tinkerers (Ollama, Haystack RAG, Python) | 13 | I started this project as a way to help my girlfriend with her new book and to learn a bit about LLMs and RAGs. The idea was to use local embeddings to ask questions about characters and explore creative possibilities, all while keeping everything local. What began as a bunch of a wild collection of scripts is now growing into a labour of love pet-project. You maybe know how it sometimes goes :)
It's not finished and polished yet, but I've made some good progress and brought it to a much better overall state over Christmas days. I'd love to show it to a few people and get some feedback and contributors. Your thoughts could really help me improve it and make it even better.
[https://github.com/TilmanGriesel/chipper](https://github.com/TilmanGriesel/chipper)
[Chipper Web-UI Demo](https://i.redd.it/s909n40udgae1.gif)
[Chipper CLI Demo](https://i.redd.it/zlsiuurzdgae1.gif)
| 2025-01-01T21:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hrdje9/chipper_airag_interface_for_tinkerers_ollama/ | Alarming_Divide_1339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrdje9 | false | null | t3_1hrdje9 | /r/LocalLLaMA/comments/1hrdje9/chipper_airag_interface_for_tinkerers_ollama/ | false | false | 13 | {'enabled': False, 'images': [{'id': '6CYMF05zRz8pmIzAe2fiiiehf8kDE3V70IoDbE6WGrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=108&crop=smart&auto=webp&s=1559e7db3020fdf5b180cba22f3e1ac80c77a973', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=216&crop=smart&auto=webp&s=004d35df50ce7b3470fb937d979b7863f98806fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=320&crop=smart&auto=webp&s=eac288b06fea435306776645a0a09feb430250f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=640&crop=smart&auto=webp&s=69b9fbbb01ec5d9c182a3f75e400219368fde485', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=960&crop=smart&auto=webp&s=c42c7f3fe82cac6d19a525454ec0f48befbb689f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?width=1080&crop=smart&auto=webp&s=2c17482ad48554c1d60aff752f127c08a99f1c45', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/oQLzW--5OR55XwAPvdLG5CmNFqUDqgVxu_hP8qJ01nI.jpg?auto=webp&s=31b520e54f9fbda0e5ecc857df604b0c5f6ff0cd', 'width': 1280}, 'variants': {}}]} |
|
I'm getting started with LLMs on Raspberry Pi 5: Using Ollama, Hailo AI Hat and Agents | 1 | [removed] | 2025-01-01T21:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hrdnnc/im_getting_started_with_llms_on_raspberry_pi_5/ | OutrageousAspect7459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrdnnc | false | null | t3_1hrdnnc | /r/LocalLLaMA/comments/1hrdnnc/im_getting_started_with_llms_on_raspberry_pi_5/ | false | false | self | 1 | null |
I have a dream... want to join me? | 1 | [removed] | 2025-01-01T21:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hrdpzx/i_have_a_dream_want_to_join_me/ | _khromalabs_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrdpzx | false | null | t3_1hrdpzx | /r/LocalLLaMA/comments/1hrdpzx/i_have_a_dream_want_to_join_me/ | false | false | self | 1 | null |
Does anyone here use Vast.ai? | 29 | No GPU provider I've tried is perfect but they generally tend to have something going for them... so far Vast.ai is the first one that's demonstrated no redeeming qualities.
* Instances seem to get deadlocked with little to no observability when setting up templates or changing them (and others seem to have noticed this: https://www.reddit.com/r/LocalLLaMA/comments/1cz90le/comment/l5g270m/)
* Most of the UI is the most painful, fucked up interface I've ever had the displeasure of managing compute with.
* Getting awful performance even on their "datacenter" instances:
https://preview.redd.it/cs48lp5ndgae1.png?width=989&format=png&auto=webp&s=78c0fa8d3668ba5b31193e692f4990d6ed27c3b4
* The max contract concept is annoying to have to deal with. It feels like they didn't want to enforce an actual SLA with providers and offloaded the burden to users.
* Prices don't seem that competitive *on the whole*? Some very specific configs (like 4x3090s/4090s) are definitely cheaper than usual... but with such crappy instance quality, model parallelism tends to result in really terrible performance like my graph above, wiping out the savings.
* They don't seem to have much in the way of the most cost-effective low-cost GPUs, like A40s
I know I'm shitting on Vast here, but I'm also genuinely trying to figure out if maybe there is some magic niche where they make sense since I already loaded up credits there.
Is it great programmatic access? Are they just doing better at some sweet spot with some specific GPU config I'm missing? They give the impression of doing a lot of business, so I'd like to make the best of things. | 2025-01-01T22:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hre4c2/does_anyone_here_use_vastai/ | MustyMustelidae | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hre4c2 | false | null | t3_1hre4c2 | /r/LocalLLaMA/comments/1hre4c2/does_anyone_here_use_vastai/ | false | false | 29 | null |
|
Best small model for understanding text | 2 | I’m building an app that reads the text of for-sale listings and converts it into structured json based on some fields it extracts from the text. I have it working well with llama 3.2 8b but it’s a bit slow and feels like massive overkill. Is there a smaller, faster open source model you’d recommend for this task? | 2025-01-01T22:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1href9n/best_small_model_for_understanding_text/ | elliotspritzer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1href9n | false | null | t3_1href9n | /r/LocalLLaMA/comments/1href9n/best_small_model_for_understanding_text/ | false | false | self | 2 | null |
LLMs are not reasoning models | 0 | LLMs are not reasoning models, and I'm getting tired of people saying otherwise.
Please, keep in mind that this is my opinion, which may differ from yours, so if it does, or even if it doesn't, please be respectful and constructive in your answer. Thanks
It's almost practically everywhere now (after the announcement of o3) that A.G.I. is here or very, very close, and that LLMs or more sophisticated architectures are able to fully reason and plan.
I use LLMs almost every day to accelerate the way I work (software development), and I can tell you, at least from my experience, that we're very far from reasoning models or an A.G.I.
And it's really frustrating for me to hear or read about people using those tools and basically saying that they can do anything, even though those people have practically no experience in algorithms or coding. This frustration isn't me just being jealous; it comes down to the fact that:
Just because some code works doesn't mean you should use it.
People are software engineers for a reason, not because they can write code, or because they can copy and paste some lines from Stack Overflow, it's because they know the overall architecture of what they're doing, why they're doing it this way and not any other way and for what purpose.
If you ask an LLM to do something, yes it *might* be able to do it, but it may also create a function that is O(n^(2)) instead of O(n). Or it may create a code that's not going to be scalable in the long run.
You'll say to me that you could ask the LLM to tell you what's the best solution, or the best possible solutions for this specific question, and my answer to you would be: How do you know which one to use if you don't even know what it means ? You're just going to blindly trust the LLM, hoping that the solution is the one for you ? And if you do use that proposed solution, how do you expect to debug it/make it evolve over time ? If your project evolves, and you start hiring someone, how do you explain your project to your new collaborator if even you don't know how it works ?
I really think it's a hubris to think that Software engineers are going to vanish from one day to the next. Not because their work may not be automated, but by the time you get a normal person to the level of a Software engineer thanks to A.I., that same Software engineer is going to be worth a whole team, or even a small company.
Yes, you could meticulously tell the LLM exactly what you want, with details everywhere, and ask it something simple, but first, it may not work, even if your prompt is dead perfect, and second, even if it does, congratulations, you just did the work of a Software engineer. When you know what you're doing, it takes less time to write the code of a small task yourself, than having to entirely explain what you want. The purpose of an LLM is not to do the job of thinking (for now), it's to do the job of doing.
I also say these models are not reasoning at all because, in my day-to-day job, I can clearly see that they are not generalizing from their training data and are practically unable to reason on real-world tasks. I'm not talking about benchmarks here, private or public, abstract or not; I'm talking about the real software I work on.
For instance, not long ago I tried to create a function that manipulates a singly linked list, using the best Claude model (the new Sonnet). Linked lists are something computer science students learn at the very beginning (this is really basic stuff), and yet it couldn't do it. I just tried with other models, and it's the same (I couldn't try o1, though).
I'm not bashing these models just to show what they can or can't do; I'm using this very specific example because it shows just how dumb they can be, and how little actual *reasoning* they do.
Linked lists require a kind of physical understanding of what you're doing: you'll probably need pen and paper (or a tablet) to get to the solution, which means applying what you know to a very specific situation, a.k.a. reasoning. In my case, I was implementing a singly linked list on top of a database, spread across 3 of its tables, which is quite different from writing a singly linked list in C or Python, and there are subtleties here and there.
Anyway, it couldn't do it, and not by a tiny margin: it fucked up quite a lot. That's because it isn't reasoning; it's just regurgitating things it has seen here and there in its training data.
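To give a rough idea of what that involves, here's a deliberately simplified, hypothetical sketch of a singly linked list stored in a database (a single self-referencing table, nothing like my actual three-table schema):

```python
# Minimal, hypothetical sketch of a singly linked list stored in a database:
# each row points at the id of the next row via next_id (NULL marks the end).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, payload TEXT, next_id INTEGER)")
conn.executemany(
    "INSERT INTO nodes (id, payload, next_id) VALUES (?, ?, ?)",
    [(1, "head", 2), (2, "middle", 3), (3, "tail", None)],
)

def walk(conn, head_id):
    """Follow next_id pointers from the head and yield payloads in list order."""
    node_id = head_id
    while node_id is not None:
        payload, node_id = conn.execute(
            "SELECT payload, next_id FROM nodes WHERE id = ?", (node_id,)
        ).fetchone()
        yield payload

print(list(walk(conn, 1)))  # ['head', 'middle', 'tail']
```

Even in this toy version, inserting or removing a node means updating the right pointers in the right order, and that step-by-step bookkeeping is exactly where the models kept falling apart for me.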
I know people will say: well, it may not work right now, but in x months or years it will. Like I said earlier, it doesn't matter whether it works if you don't know why it works.
When you go to the doctor, they might tell you that you have a cold or the flu. Are you going to tell me that just because you could have told me that too, it means you're a doctor, or almost qualified to be one? That's nonsense, because as long as you don't know why you're saying what you're saying, your answer is almost worthless.
I'm not writing this post to piss on LLMs or similar architectures; I'm writing it as a reminder that, in the end, LLMs are just tools, and tools don't replace people, they enhance them.
You might say I'm delusional for thinking this way, but I'm sorry to tell you that, until proven otherwise, you've been, to some extent, lied to by corporations and the media into thinking that A.G.I. is near.
The fact is, it isn't, and no one really knows when we'll have thinking machines. Until then, let's stop pretending that these tools are magical, that they can do anything or replace entire teams of engineers, designers, or writers, and instead start thinking seriously about how to incorporate them into our workflows to improve our day-to-day lives.
The future we've been promised is, well, a future: it's definitely not here yet, and it's going to take far more in the way of architectural change than just test-time compute (I hate that term) to get there.
Thank you for reading!
Happy New Year! | 2025-01-01T22:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hretea/llms_are_not_reasoning_models/ | SignalCompetitive582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hretea | false | null | t3_1hretea | /r/LocalLLaMA/comments/1hretea/llms_are_not_reasoning_models/ | false | false | self | 0 | null
How can I run an already-downloaded .gguf model in Termux? | 1 | [removed] | 2025-01-01T22:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hretmd/cómo_puedo_ejecutar_un_modelo_gguf_ya_descargado/ | Annual_Library_8288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hretmd | false | null | t3_1hretmd | /r/LocalLLaMA/comments/1hretmd/cómo_puedo_ejecutar_un_modelo_gguf_ya_descargado/ | false | false | self | 1 | null
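A minimal sketch of one common way to load an already-downloaded .gguf from Python via the llama-cpp-python binding, assuming the package builds in the target environment (the model path below is only a placeholder):

```python
# Sketch: load a local .gguf file and run a prompt with llama-cpp-python.
# Assumes `pip install llama-cpp-python` succeeded in the environment.
from llama_cpp import Llama

llm = Llama(
    model_path="/data/data/com.termux/files/home/models/model.gguf",  # placeholder path
    n_ctx=2048,     # context window
    n_threads=4,    # tune to the device's CPU cores
)

out = llm("Q: What is a GGUF file?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```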
☠️☠️☠️☠️☠️ | 178 | 2025-01-02T00:27:18 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrgwqd | false | null | t3_1hrgwqd | /r/LocalLLaMA/comments/1hrgwqd/_/ | false | false | 178 | {'enabled': True, 'images': [{'id': 'PMrJEI3O5JJQGq6AL0unYK-Husgaq848ESsqL06kywU', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=108&crop=smart&auto=webp&s=6647fe6973fb5fe9f657f34fed2b8a15b46340e5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=216&crop=smart&auto=webp&s=aa20a405fb6e5b2aec7faf873f2fe4ef944a1689', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=320&crop=smart&auto=webp&s=b1c32e4f9ec834fdd2c112289c848ac790076ebd', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=640&crop=smart&auto=webp&s=d0f606d418a2eff234b010dcdabbf03a6a6c8771', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=960&crop=smart&auto=webp&s=0207a2d155d048ca06bfe1f8cf9f381fa3cf2b2c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?width=1080&crop=smart&auto=webp&s=5d43c3fb3bd50a2101474d1adc45bedc3cae02f9', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/xylms0416hae1.jpeg?auto=webp&s=6bc8744f84ad7a57ec5d0ad0746e4fb49dce68ec', 'width': 1290}, 'variants': {}}]} |
|||
Actual word count distribution in a 400-500 word writing task while maintaining a running count after each sentence by LLM | 15 | 2025-01-02T00:49:08 | zero0_one1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrhd4d | false | null | t3_1hrhd4d | /r/LocalLLaMA/comments/1hrhd4d/actual_word_count_distribution_in_a_400500_word/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'ti3h7YI3FP3DcTjthGqTMDKxS02B3-3OZ9iBkHDv65s', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=108&crop=smart&auto=webp&s=62b7997d807df3bb54961447084167c5b1f6685d', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=216&crop=smart&auto=webp&s=5d31d14d40b7a0a97aa60feb3acff4098deb2456', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=320&crop=smart&auto=webp&s=581645735ede7fd62e045225abd3df86ed928e82', 'width': 320}, {'height': 284, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=640&crop=smart&auto=webp&s=2bc6a9ea105c4b1868641da628a31c56d884e377', 'width': 640}, {'height': 426, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=960&crop=smart&auto=webp&s=5d6bf1afa9db1f74d4d0c13033695fa8c661696a', 'width': 960}, {'height': 480, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?width=1080&crop=smart&auto=webp&s=40f0eee9eea083246e6980e020973da0a0974c98', 'width': 1080}], 'source': {'height': 800, 'url': 'https://preview.redd.it/v01ir6tn9hae1.png?auto=webp&s=94da47c2782ded363df0019d2ad66cdfb02bbde4', 'width': 1800}, 'variants': {}}]} |
|||
ollama pull progress bar goes backward | 1 | [removed] | 2025-01-02T00:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hrhf05/ollama_pull_progress_bar_goes_backward/ | Existing-Mirror2315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrhf05 | false | null | t3_1hrhf05 | /r/LocalLLaMA/comments/1hrhf05/ollama_pull_progress_bar_goes_backward/ | false | false | self | 1 | null |
Using XGBoost as a reward model? | 1 | [removed] | 2025-01-02T01:00:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hrhluj/using_xgboost_as_a_reward_model/ | Wonderful_Alfalfa115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrhluj | false | null | t3_1hrhluj | /r/LocalLLaMA/comments/1hrhluj/using_xgboost_as_a_reward_model/ | false | false | self | 1 | null |