Upload README.md with huggingface_hub
README.md ADDED
---
license: mit
tags:
  - computer-vision
  - microscopy
  - materials-science
  - encoder
  - segmentation
  - pytorch
  - pretrained
library_name: pretrained-microscopy-models
---

# Pretrained Microscopy Encoder - resnet50 (micronet)

This is a `resnet50` encoder pretrained on the `micronet` microscopy dataset, prepared for use with [segmentation_models.pytorch](https://github.com/qubvel-org/segmentation_models.pytorch).

The weights were originally pretrained for the [pretrained-microscopy-models](https://github.com/nasa/pretrained-microscopy-models) project.

## Model Details

- **Architecture**: resnet50
- **Pretrained on**: micronet
- **Input shape**: RGB images
- **Framework**: PyTorch
- **Use case**: feature extraction, segmentation backbone

## Files

- `encoder_weights.pth` - PyTorch `state_dict()` of the encoder
- `README.md` - this model card
- `encoder.py` - sample code for using the encoder within a U-Net (see the Usage sketch below)

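## Usage

The snippet below is a minimal loading sketch, not taken from `encoder.py` itself: it assumes `encoder_weights.pth` from this repository has been downloaded to the working directory and that `segmentation_models_pytorch` is installed, and it shows one way the encoder weights could be attached to a U-Net.

```python
import torch
import segmentation_models_pytorch as smp

# Build a U-Net around a resnet50 encoder without ImageNet weights;
# the MicroNet-pretrained weights are loaded into the encoder below.
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights=None,  # skip default initialisation
    in_channels=3,         # RGB images
    classes=1,             # adjust to your number of segmentation classes
)

# Load the pretrained encoder state_dict shipped in this repository
# (assumed to be saved in the current working directory).
state_dict = torch.load("encoder_weights.pth", map_location="cpu")
model.encoder.load_state_dict(state_dict)

# The encoder alone can also serve as a feature extractor; smp encoders
# return a list of feature maps at multiple resolutions.
with torch.no_grad():
    features = model.encoder(torch.randn(1, 3, 224, 224))
```

The same pattern should work with other `segmentation_models_pytorch` decoders (e.g. `smp.FPN`, `smp.DeepLabV3Plus`), since the encoder is shared across architectures.
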
## License

MIT