library_name: transformers
tags:
- text-generation-inference
- multilingual
---

# **Lambda-Equulei-1.5B-xLingual**

> **Lambda-Equulei-1.5B-xLingual** is a **multilingual conversational model** fine-tuned from **Qwen2-1.5B**, designed for **cross-lingual chat and experimental conversational use** across **30+ languages**. It combines strong multilingual understanding with natural dialogue in a compact model, making it well suited for international communication tools, language learning platforms, and global conversational assistants.

## **Key Features**

1. **Multilingual Conversational Excellence**
   Trained to hold natural, flowing conversations across 30+ languages, Lambda-Equulei-1.5B-xLingual enables seamless cross-cultural communication and supports diverse linguistic contexts for global applications.

2. **Extensive Language Support (30+ Languages)**
   Understands, responds, and maintains context fluently in **over 30 languages**, including English, Chinese, Spanish, French, German, Japanese, Korean, Arabic, Hindi, Portuguese, Russian, Italian, Dutch, and many regional languages.

3. **Compact yet Conversationally Rich**
   At only 1.5B parameters, the model delivers strong natural dialogue, context retention, cultural awareness, and nuanced conversation with minimal resource demands.

4. **Experimental Conversational AI**
   Provides dynamic, context-aware responses that adapt to different conversational styles, cultural nuances, and communication patterns across languages.

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Lambda-Equulei-1.5B-xLingual"

# Load weights in the checkpoint's native dtype and place them on the
# available device(s) automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Hello! Can you help me practice Spanish conversation?"
messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant capable of conversing naturally in over 30 languages."},
    {"role": "user", "content": prompt}
]

# Render the chat history into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Slice off the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
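
The quickstart covers a single turn. For experimenting with the model's cross-lingual chat, a small helper that re-applies the chat template to a growing message history is convenient. This is a minimal sketch rather than an official API: the `chat` helper and the sample Spanish/French prompts are illustrative, and it assumes the `model` and `tokenizer` objects loaded above.

```python
# Minimal multi-turn helper (illustrative; reuses `model` and `tokenizer`
# from the quickstart above).
def chat(history, user_text, max_new_tokens=256):
    history.append({"role": "user", "content": user_text})
    text = tokenizer.apply_chat_template(
        history, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.batch_decode(
        output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )[0]
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful multilingual assistant."}]
print(chat(history, "¡Hola! ¿Puedes ayudarme a practicar español?"))  # Spanish turn
print(chat(history, "Merci ! Continuons en français."))               # French turn
```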

## **Intended Use**

- **Multilingual Chat Applications**: Natural conversation support across 30+ languages for global platforms.
- **Language Learning Tools**: Interactive practice partners for students learning new languages.
- **International Customer Support**: Cross-cultural communication for global businesses and services.
- **Cultural Exchange Platforms**: Facilitating meaningful conversations between speakers of different languages.
- **Lightweight Multilingual Bots**: Embedded use cases in mobile apps, web platforms, or resource-constrained environments (see the loading sketch below).
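
For the resource-constrained deployments mentioned in the last bullet, one common option is loading the checkpoint quantized. The sketch below is an assumption, not an officially supported path for this card: it uses the `transformers` + `bitsandbytes` 4-bit loading route, which requires a CUDA GPU with the `bitsandbytes` package installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Lambda-Equulei-1.5B-xLingual"

# 4-bit NF4 quantization to shrink the memory footprint; exact savings and
# quality impact are not specified by this card and should be measured.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```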

## **Limitations**

1. **Experimental Nature**:
   As an experimental conversational model, response quality and consistency may vary across languages and contexts.

2. **Language Proficiency Variation**:
   While 30+ languages are supported, proficiency may differ between major languages (English, Chinese, Spanish) and less common regional languages.

3. **Parameter Scale Constraints**:
   Though efficient, the 1.5B parameter size may limit performance on highly complex multilingual tasks compared to larger models.

4. **Bias from Base Model**:
   The model inherits any biases present in Qwen2-1.5B's pretraining. Cultural-sensitivity review and output validation are recommended for sensitive applications.

5. **Context Length Limitations**:
   May struggle with very long conversations or complex multi-turn dialogues that require extensive context retention (see the history-trimming sketch below).
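
A common mitigation for the last limitation is to trim the oldest turns before the rendered prompt outgrows the model's context window. Below is a minimal sketch assuming the quickstart's `tokenizer`; the `max_tokens` budget is a hypothetical value that should be set below the model's actual context length.

```python
# Drop the oldest user/assistant pairs (keeping the system message) until the
# rendered prompt fits the token budget. Illustrative only.
def trim_history(messages, tokenizer, max_tokens=4096):
    def prompt_tokens(msgs):
        return len(tokenizer.apply_chat_template(
            msgs, tokenize=True, add_generation_prompt=True
        ))

    system, turns = messages[:1], messages[1:]  # assumes messages[0] is the system prompt
    while turns and prompt_tokens(system + turns) > max_tokens:
        turns = turns[2:]  # remove the oldest user+assistant pair
    return system + turns
```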
|