Model description
[More Information Needed]
Intended uses & limitations
[More Information Needed]
Training Procedure
[More Information Needed]
Hyperparameters
Hyperparameter | Value |
---|---|
memory | |
steps | [('scaler', StandardScaler()), ('svm', SGDClassifier())] |
verbose | False |
scaler | StandardScaler() |
svm | SGDClassifier() |
scaler__copy | True |
scaler__with_mean | True |
scaler__with_std | True |
svm__alpha | 0.0001 |
svm__average | False |
svm__class_weight | |
svm__early_stopping | False |
svm__epsilon | 0.1 |
svm__eta0 | 0.0 |
svm__fit_intercept | True |
svm__l1_ratio | 0.15 |
svm__learning_rate | optimal |
svm__loss | hinge |
svm__max_iter | 1000 |
svm__n_iter_no_change | 5 |
svm__n_jobs | |
svm__penalty | l2 |
svm__power_t | 0.5 |
svm__random_state | |
svm__shuffle | True |
svm__tol | 0.001 |
svm__validation_fraction | 0.1 |
svm__verbose | 0 |
svm__warm_start | False |
Model Plot
The model is the scikit-learn pipeline `Pipeline(steps=[('scaler', StandardScaler()), ('svm', SGDClassifier())])`, i.e. a `StandardScaler` step followed by an `SGDClassifier` step.
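For reference, an equivalent unfitted pipeline can be constructed directly from the hyperparameters listed above. This is a minimal sketch: it reproduces the configuration (which matches scikit-learn's defaults), not the trained weights.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier

# Unfitted pipeline with the settings from the hyperparameter table.
model = Pipeline(steps=[
    ("scaler", StandardScaler(copy=True, with_mean=True, with_std=True)),
    ("svm", SGDClassifier(
        loss="hinge",              # linear SVM objective
        penalty="l2",
        alpha=0.0001,
        l1_ratio=0.15,
        fit_intercept=True,
        max_iter=1000,
        tol=0.001,
        shuffle=True,
        learning_rate="optimal",
        eta0=0.0,
        power_t=0.5,
        early_stopping=False,
        validation_fraction=0.1,
        n_iter_no_change=5,
        average=False,
    )),
])
```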
Evaluation Results
Metric | Value |
---|---|
accuracy | 0.834443 |
F1 score | 0.707803 |
precision | 0.813203 |
recall | 0.62659 |
How to Get Started with the Model
[More Information Needed]
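As a placeholder until this section is filled in, here is a minimal usage sketch. It assumes the fitted pipeline was persisted with joblib under the hypothetical filename `model.pkl` and that inputs are numeric feature arrays with 10 features; adjust both to match the actual artifact and data.

```python
import joblib
import numpy as np

# Load the persisted pipeline (hypothetical filename).
pipeline = joblib.load("model.pkl")

# Placeholder input: 5 samples with 10 hypothetical numeric features.
X_new = np.random.rand(5, 10)

# The pipeline scales the features and applies the SGD classifier.
predictions = pipeline.predict(X_new)
print(predictions)
```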
Model Card Authors
This model card was written by the following authors:
[More Information Needed]
Model Card Contact
You can contact the model card authors through the following channels: [More Information Needed]
Citation
Below you can find citation information.
BibTeX:
[More Information Needed]
eval_method
The model is evaluated on a held-out test split using accuracy, precision, recall, and F1 score.
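A minimal sketch of this evaluation procedure with scikit-learn, using a synthetic placeholder dataset since the card does not name the actual data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier

# Placeholder binary classification data (the real dataset is not specified here).
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Same pipeline structure as described in this card, with default settings.
model = Pipeline(steps=[("scaler", StandardScaler()), ("svm", SGDClassifier())])
model.fit(X_train, y_train)

# Compute the four metrics reported in the Evaluation Results table.
y_pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
```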