Upload folder using huggingface_hub
- README.md +61 -0
- config.json +29 -0
- generation_config.json +6 -0
- model.safetensors +3 -0
- pytorch_model.bin +3 -0
- tokenizer.model +3 -0
- tokenizer.vocab +0 -0
- tokenizer_config.json +12 -0
    	
README.md ADDED
@@ -0,0 +1,61 @@
---
license: apache-2.0
base_model: llama
library_name: transformers
pipeline_tag: text-generation
tags:
- one-way-polyglot
- japanese
- english
- bilingual
- small-model
---

# one-way-polyglot-8m-tied

A one-way polyglot language model trained to understand Japanese but generate only English.

## Model Details

- **Architecture**: LLaMA-based transformer
- **Parameters**: 8,519,936 (8.5M)
- **Vocabulary**: 16,384 tokens (bilingual SentencePiece)
- **Context Length**: 512 tokens
- **Embedding Strategy**: Tied

## Capabilities

- **Semantic Transfer**: Understands Japanese input and generates contextually appropriate English
- **One-Way Constraint**: Strong bias toward English-only generation
- **Name Transliteration**: Can transliterate Japanese names to English (context-dependent)

## Training Data

Trained on bilingual Japanese-English story data with masked loss on Japanese prefixes to enforce one-way generation.

## Usage

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("one-way-polyglot-8m-tied")
tokenizer = LlamaTokenizer.from_pretrained("one-way-polyglot-8m-tied")

# Japanese input, English output
# "Once upon a time, there was a little girl with a red umbrella."
prompt = "昔々、赤い傘を持った少女がいました。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Model Variants

This is part of a series exploring one-way polyglot capabilities:
- 1.25M parameters (tied embeddings)
- 8.5M parameters (tied embeddings)
- 12.7M parameters (untied embeddings)
- 15.7M parameters (tied embeddings)

## License

Apache 2.0
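A note on the "masked loss on Japanese prefixes" mentioned under Training Data: the sketch below illustrates the usual way such masking is done in PyTorch, by setting prefix labels to -100 so the loss only covers the English continuation. It is an illustration of the technique, not this repo's actual training code; the token ids and the `build_labels` helper are made up for the example.

```python
import torch

def build_labels(input_ids: torch.Tensor, prefix_len: int) -> torch.Tensor:
    """Hypothetical helper: copy the inputs and mask the Japanese prefix.
    -100 is the ignore_index of PyTorch cross-entropy, so masked positions
    contribute nothing to the loss or gradients."""
    labels = input_ids.clone()
    labels[:prefix_len] = -100  # no loss signal from the Japanese prefix
    return labels

# Toy example: BOS plus 3 made-up Japanese prefix tokens, then an English continuation
input_ids = torch.tensor([1, 9001, 9002, 9003, 501, 502, 503, 2])
labels = build_labels(input_ids, prefix_len=4)
# labels -> [-100, -100, -100, -100, 501, 502, 503, 2]
```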
    	
config.json ADDED
@@ -0,0 +1,29 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 256,
  "initializer_range": 0.02,
  "intermediate_size": 682,
  "max_position_embeddings": 512,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 4,
  "num_hidden_layers": 6,
  "num_key_value_heads": 2,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.54.1",
  "use_cache": true,
  "vocab_size": 16384
}
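As a cross-check, the README's parameter count of 8,519,936 can be re-derived from this config. The arithmetic below is a sketch assuming the standard LLaMA layout implied by these fields (no attention/MLP biases, grouped-query attention, tied embeddings counted once):

```python
# Values from config.json
vocab, hidden, inter = 16384, 256, 682
layers, heads, kv_heads, head_dim = 6, 4, 2, 64

embed = vocab * hidden                       # 4,194,304; tied with lm_head, counted once
attn = (hidden * heads * head_dim            # q_proj
        + 2 * hidden * kv_heads * head_dim   # k_proj and v_proj (grouped-query)
        + heads * head_dim * hidden)         # o_proj
mlp = 3 * hidden * inter                     # gate_proj, up_proj, down_proj
norms = 2 * hidden                           # input and post-attention RMSNorm
total = embed + layers * (attn + mlp + norms) + hidden  # + final RMSNorm
print(total)  # 8519936, matching the README
```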
    	
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.54.1"
}
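This generation config only pins the BOS/EOS ids and sets no sampling defaults (do_sample stays at its False default), which is why the README example passes do_sample and temperature explicitly. A quick way to confirm, assuming the files sit in a local one-way-polyglot-8m-tied folder:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("one-way-polyglot-8m-tied")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # 1 2
print(gen_cfg.do_sample)  # False: sampling must be requested per generate() call
```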
    	
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1804e76de82b0380509a595273b62af4117bf200d42183ba15d433243bfbac91
size 34085832
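The size field of this LFS pointer is consistent with the stated parameter count: 8,519,936 float32 weights occupy 34,079,744 bytes, and the remaining ~6 KB plausibly accounts for the safetensors header (a rough consistency check, not an exact accounting of the format):

```python
params, dtype_bytes = 8_519_936, 4  # float32
weights = params * dtype_bytes      # 34,079,744 bytes of raw weights
print(34_085_832 - weights)         # 6,088 bytes left for the safetensors header
```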
    	
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4359c7d373bfaa6bc9ec396e10c9cde2d0e7604a0821075d8529b3561b7c2778
size 34102703
    	
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18fd3bcd564240df1343dda502457ea5db419f4061ca442a9b204796d3d79301
size 581374
    	
tokenizer.vocab ADDED
The diff for this file is too large to render.
    	
tokenizer_config.json ADDED
@@ -0,0 +1,12 @@
{
  "tokenizer_class": "LlamaTokenizer",
  "vocab_size": 16384,
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "bos_token": "[BOS]",
  "eos_token": "[EOS]",
  "unk_token": "[UNK]",
  "add_bos_token": true,
  "add_eos_token": false,
  "clean_up_tokenization_spaces": false
}
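Given add_bos_token true and add_eos_token false, encodings should start with the BOS token and never be auto-terminated with EOS, which matches a decoder-only generation setup. A quick check, again assuming a local one-way-polyglot-8m-tied folder and that [BOS]/[EOS] map to ids 1 and 2 as in config.json:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("one-way-polyglot-8m-tied")
ids = tokenizer("Once upon a time").input_ids
print(ids[0] == tokenizer.bos_token_id)   # expected True: BOS is prepended
print(ids[-1] == tokenizer.eos_token_id)  # expected False: EOS is not appended
```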