Model Card for JaySenpai/bert-model

This is a fine-tuned BERT model that classifies YouTube channel content into categories such as Education, Technology, Finance, and more.

Model Details

Model Description

This is a fine-tuned BERT-based classification model designed to categorize YouTube video metadata (specifically titles and descriptions) into one of 20 categories, including:

  • Education
  • Technology
  • Motivation
  • Entertainment
  • Gaming

The model is based on the bert-base-uncased architecture from the Hugging Face Transformers library and was fine-tuned using a labeled dataset of YouTube content. It is optimized for short text classification, making it ideal for content analytics, recommendation systems, and media monitoring tools focused on YouTube.
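For quick experimentation, the model can also be loaded through the Transformers text-classification pipeline. The snippet below is a minimal sketch and assumes the checkpoint is published under the repository id JaySenpai/bert-model used in the getting-started example later in this card.

from transformers import pipeline

# Load the fine-tuned checkpoint through the high-level pipeline API.
classifier = pipeline("text-classification", model="JaySenpai/bert-model")

result = classifier("10 Tips to Grow Your YouTube Channel")
print(result)  # e.g. [{'label': '...', 'score': ...}]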


Highlights

  • Model type: BERT (Transformer-based)
  • Input: Raw text (title + optional description)
  • Task: Multi-class classification
  • Classes: 20 categories, such as Gaming, Technology, and Finance
  • Pretrained base: bert-base-uncased
  • Use case: YouTube video categorization, content recommendation, channel analysis
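Because the input is a title plus an optional description, a common pattern is to join the two fields into a single short string before tokenization. The helper below is an illustrative sketch, not part of the released code.

def build_input(title: str, description: str = "") -> str:
    # Concatenate the title and an optional description into one short text input.
    return title if not description else f"{title}. {description}"

# Example:
# build_input("10 Tips to Grow Your YouTube Channel", "Practical advice for new creators")
# -> "10 Tips to Grow Your YouTube Channel. Practical advice for new creators"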


  • Developed by: Jayesh Mehta
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: BERT-based sequence classification model
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model [optional]: bert-base-uncased

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

The model can be used directly to classify short English YouTube texts (titles plus optional descriptions) into one of the 20 supported content categories, for example in content analytics or media-monitoring workflows.

Downstream Use [optional]

This model can be integrated into larger systems (see the sketch after this list), such as:

  • Content management systems
  • YouTube channel analytics tools
  • Personalized recommendation engines
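As an illustration of the channel-analytics use case, the sketch below classifies a small batch of video titles and aggregates the predictions into a per-channel category profile. The repository id is taken from the getting-started example later in this card; the channel data and variable names are hypothetical.

from collections import Counter

import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("JaySenpai/bert-model")
tokenizer = BertTokenizer.from_pretrained("JaySenpai/bert-model")
model.eval()

# Hypothetical channel data: a few recent video titles.
channel_videos = [
    "I Tried the New RTX Laptop for a Week",
    "5 Budget Tips Every Student Should Know",
    "Speedrunning My Favourite Indie Game",
]

with torch.no_grad():
    inputs = tokenizer(channel_videos, return_tensors="pt", padding=True, truncation=True)
    predictions = model(**inputs).logits.argmax(dim=1).tolist()

# Aggregate predicted class ids into a per-channel category profile.
# If the config does not store label names, use a manual mapping as in the getting-started example.
profile = Counter(model.config.id2label[i] for i in predictions)
print(profile)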

Out-of-Scope Use

  • The model is not suitable for long-form text or transcript-level classification.
  • It should not be used to classify non-YouTube content or languages other than English.
  • Avoid using it in sensitive decision-making scenarios (e.g., legal, medical).

Bias, Risks, and Limitations

Like most models trained on public or scraped data:

  • The model may carry biases from the underlying data (e.g., overrepresentation of certain video types).
  • It may misclassify mixed-genre or ambiguous titles (e.g., "Top 10 Gaming Laptops for Students").
  • It is sensitive to text length and clarity: very short or vague titles may reduce accuracy.

Recommendations

  • Use the model as an assistive tool, not a final decision-maker.
  • Evaluate its performance on your specific data before deploying.
  • Consider adding user feedback or manual review in production systems (see the sketch below).
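One way to implement the manual-review recommendation is to apply a softmax to the logits and route low-confidence predictions to a human reviewer. The sketch below is illustrative; the threshold value is arbitrary and should be tuned on your own validation data.

import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("JaySenpai/bert-model")
tokenizer = BertTokenizer.from_pretrained("JaySenpai/bert-model")
model.eval()

CONFIDENCE_THRESHOLD = 0.7  # illustrative value; tune on your own validation data

def classify_or_flag(text: str):
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        probs = F.softmax(model(**inputs).logits, dim=1).squeeze(0)
    confidence, label_id = probs.max(dim=0)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        return "needs_manual_review", confidence.item()
    return model.config.id2label[label_id.item()], confidence.item()

print(classify_or_flag("Top 10 Gaming Laptops for Students"))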

How to Get Started with the Model

from transformers import BertTokenizer, BertForSequenceClassification

# Load the fine-tuned model and its tokenizer.
model = BertForSequenceClassification.from_pretrained("JaySenpai/bert-model")
tokenizer = BertTokenizer.from_pretrained("JaySenpai/bert-model")

# Classify a single video title.
text = "10 Tips to Grow Your YouTube Channel"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
prediction = outputs.logits.argmax(dim=1).item()

# Example label mapping (a subset of the 20 categories).
labels = {0: "Education", 1: "Comedy and Humour", 2: "Gaming", 3: "Technology", 4: "Motivation"}
print("Predicted label:", labels.get(prediction, f"class id {prediction}"))
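If the fine-tuned checkpoint was saved with its class names, the hard-coded labels dictionary above can be replaced by the mapping stored in the model configuration (this assumes id2label was set during fine-tuning):

# Use the label mapping stored in the model config, if it was set during fine-tuning.
print(model.config.id2label)
print("Predicted label:", model.config.id2label[prediction])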

Training Details

Training Data

The model was fine-tuned using a labeled dataset of YouTube titles and descriptions, mapped to the following 20 categories:

  • Education
  • Travel
  • Cooking
  • Gaming
  • Music
  • Health and Fitness
  • Finance
  • Technology
  • Vlogging
  • Beauty & Fashion
  • Digital Marketing
  • Movies/Series Reviews
  • Comedy and Humour
  • Podcast
  • Youtube or Instagram Grow Tips
  • Online Income
  • ASMR
  • Business and Marketing
  • News
  • Motivation
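The card does not specify the exact label-to-id encoding. As a hedged sketch, the 20 category names above could be mapped to integer ids and attached to the classification head on top of bert-base-uncased as follows (the ordering is illustrative, not necessarily the one used during training):

from transformers import BertForSequenceClassification

categories = [
    "Education", "Travel", "Cooking", "Gaming", "Music",
    "Health and Fitness", "Finance", "Technology", "Vlogging", "Beauty & Fashion",
    "Digital Marketing", "Movies/Series Reviews", "Comedy and Humour", "Podcast",
    "Youtube or Instagram Grow Tips", "Online Income", "ASMR",
    "Business and Marketing", "News", "Motivation",
]

label2id = {name: i for i, name in enumerate(categories)}
id2label = {i: name for i, name in enumerate(categories)}

# Attach the mapping when initialising the classification head on top of bert-base-uncased.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(categories),
    id2label=id2label,
    label2id=label2id,
)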

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]
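The exact training regime is not reported. Purely as an illustration, a typical fine-tuning setup with the Hugging Face Trainer might look like the sketch below; every hyperparameter value and the tiny stand-in dataset are assumptions, not the configuration or data actually used for this model.

from datasets import Dataset
from transformers import AutoTokenizer, BertForSequenceClassification, Trainer, TrainingArguments

# Illustrative setup only; hyperparameters are assumed, not reported.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=20)

# Tiny stand-in dataset of (title, label id) pairs; the real training data is not released.
raw = Dataset.from_dict({
    "text": ["10 Tips to Grow Your YouTube Channel", "Best Budget Gaming Laptops 2024"],
    "label": [14, 3],
})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

training_args = TrainingArguments(
    output_dir="bert-youtube-classifier",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
)

trainer = Trainer(model=model, args=training_args, train_dataset=tokenized)
trainer.train()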

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

The model was evaluated on a held-out validation set of manually labeled YouTube titles and descriptions.

Factors

[More Information Needed]

Metrics

  • Accuracy: ~97%
  • F1-score (macro): ~0.95
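For reference, these metrics can be recomputed on any held-out split with scikit-learn; the class ids below are hypothetical placeholders standing in for the model's predictions and the gold labels.

from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Hypothetical gold and predicted class ids for a held-out split.
y_true = [0, 3, 2, 19, 0, 3]
y_pred = [0, 3, 2, 0, 0, 3]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))

# The confusion matrix makes category-level errors (e.g. Motivation vs. Education) visible.
print(confusion_matrix(y_true, y_pred))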

Results

The model performed well on clear-cut categories like "Gaming" and "Technology" but showed confusion between "Motivation" and "Education" in edge cases.

Summary

Overall, the model reaches roughly 97% accuracy and a macro F1 of about 0.95 on the held-out set, with most remaining errors occurring between closely related categories such as Motivation and Education.

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

Author: Jayesh Mehta (JaySenpai)
Hugging Face: @JaySenpai
