Add research section on DeepSeek-R1T-Chimera
README.md CHANGED
@@ -13,18 +13,22 @@ short_description: TNG on huggingface
 We solve hard IT problems.
 
 ## Latest Research
-Check out our latest research
+Check out our latest research:
 
-
-
-[
+- **DeepSeek-R1T-Chimera**
+  - [Announcement on X](https://x.com/tngtech/status/1916284566127444468)
+  - [Model Card](https://huggingface.co/tngtech/DeepSeek-R1T-Chimera)
+  - More details will follow soon :-)
+- **Mixture of Tunable Experts**
+  - [arXiv: Mixture of Tunable Experts](https://arxiv.org/abs/2502.11096)
+  - [blog: Mixture of Tunable Experts](https://huggingface.co/blog/rbrt/mixture-of-tunable-experts)
 
 ## Blog
 Read our latest blog posts:
 
-[
-
-[Efficient Request Queueing – Optimizing LLM Performance](https://huggingface.co/blog/tngtech/llm-performance-request-queueing)
+- [Prefill and Decode for Concurrent Requests - Optimizing LLM Performance](https://huggingface.co/blog/tngtech/llm-performance-prefill-decode-concurrent-requests)
+- [Finetuning olmOCR to be a faithful OCR-Engine](https://huggingface.co/blog/tngtech/finetuning-olmocr-to-be-a-faithful-ocr-engine)
+- [Efficient Request Queueing – Optimizing LLM Performance](https://huggingface.co/blog/tngtech/llm-performance-request-queueing)
 
 ## Follow us
 