---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6448
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: How are retail sales data integrated into trading models?
sentences:
- >-
Lagged variables represent historical values of a time series variable
and are used in forecasting models to capture the impact of past
observations on future market trends, enhancing the accuracy of
predictions by incorporating relevant historical information.
- >-
Retail sales data reflect consumer spending patterns and overall
economic activity. Traders analyze this indicator to gauge consumer
confidence, sectoral performance, and potential market trends related to
retail-focused stocks.
- >-
Regulatory approval for a new drug can have a positive impact on a
pharmaceutical company's stock price as it opens up new revenue streams
and market opportunities.
- source_sentence: What impact does algorithmic trading have on market liquidity?
sentences:
- >-
Volume analysis in stock trading involves studying the number of shares
or contracts traded in a security or market over a specific period to
gauge the strength or weakness of a price move.
- >-
Social media sentiment analysis can assist in detecting anomalies in
stock prices by capturing public sentiment and opinions on stocks,
identifying trends or sudden shifts in sentiment that may precede
abnormal price movements.
- >-
Algorithmic trading can impact market liquidity by increasing trading
speed, efficiency, and overall trading volume, leading to potential
liquidity disruptions during certain market conditions.
- source_sentence: >-
What considerations should traders take into account when selecting an
adaptive trading algorithm?
sentences:
- >-
Historical price data helps analysts identify patterns and trends that
can be used to develop models for predicting future stock prices based
on past performance.
- >-
Traders should consider factors such as performance metrics, risk
management capabilities, adaptability to changing market conditions,
data requirements, and the level of transparency and control offered by
the algorithm.
- >-
A stock exchange is a centralized marketplace where securities like
stocks, bonds, and commodities are bought and sold by investors and
traders.
- source_sentence: >-
How can currency exchange rates and forex markets be integrated into
trading models alongside macroeconomic indicators?
sentences:
- >-
Moving averages smooth out price data over a specified period, making it
easier to identify trends and reversals in stock prices.
- >-
Currency exchange rates and forex markets are integrated into trading
models to assess currency risk, international trade impact, and
cross-border investment opportunities influenced by macroeconomic
indicators.
- >-
Investors use quantitative momentum indicators to identify securities
that are gaining positive momentum and potentially generating profits by
buying those assets and selling underperforming assets.
- source_sentence: >-
What role does back-testing play in refining event-driven trading
strategies using historical data and real-time analysis?
sentences:
- >-
Genetic algorithms are well-suited for solving multi-objective
optimization problems, nonlinear and non-convex optimization problems,
problems with high-dimensional search spaces, and problems where
traditional methods may struggle to find optimal solutions.
- >-
Risk management techniques such as position sizing, portfolio
diversification, and stop-loss orders are often used in quantitative
momentum strategies to manage downside risk and protect against large
losses.
- >-
Back-testing allows traders to evaluate the performance of event-driven
trading strategies using historical data, identify patterns, optimize
parameters, and refine strategies for real-time implementation.
datasets:
- yymYYM/stock_trading_QA
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@3
- cosine_precision@3
- cosine_recall@3
- cosine_ndcg@3
- cosine_mrr@3
- cosine_map@3
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@3
value: 0.6750348675034867
name: Cosine Accuracy@3
- type: cosine_precision@3
value: 0.22501162250116222
name: Cosine Precision@3
- type: cosine_recall@3
value: 0.6750348675034867
name: Cosine Recall@3
- type: cosine_ndcg@3
value: 0.5838116811117793
name: Cosine Ndcg@3
- type: cosine_mrr@3
value: 0.5523012552301251
name: Cosine Mrr@3
- type: cosine_map@3
value: 0.5523012552301255
name: Cosine Map@3
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the stock_trading_qa dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: stock_trading_qa
- Language: en
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://www.sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
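For intuition, here is a minimal sketch of what these three modules compute, using plain `transformers` and `torch`. The variable names are illustrative; in practice the `SentenceTransformer` class shown under Usage handles all of this internally.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
bert = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

# (0) Transformer: tokenize (lowercased, truncated to 512 tokens) and encode
inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)

# (1) Pooling with pooling_mode_cls_token=True: keep only the [CLS] vector
cls_embedding = token_embeddings[:, 0]                   # (batch, 768)

# (2) Normalize: L2-normalize so dot products equal cosine similarities
sentence_embedding = F.normalize(cls_embedding, p=2, dim=1)
```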
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("iamleonie/leonies-test")
# Run inference
sentences = [
    'What role does back-testing play in refining event-driven trading strategies using historical data and real-time analysis?',
    'Back-testing allows traders to evaluate the performance of event-driven trading strategies using historical data, identify patterns, optimize parameters, and refine strategies for real-time implementation.',
    'Risk management techniques such as position sizing, portfolio diversification, and stop-loss orders are often used in quantitative momentum strategies to manage downside risk and protect against large losses.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
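For retrieval-style usage, you can encode a query and a set of candidate contexts separately and rank the candidates by similarity. The texts below are taken from the training samples and are purely illustrative:

```python
query_embedding = model.encode(
    "How do traders use Fibonacci levels as trading signals?"
)
context_embeddings = model.encode([
    "Fibonacci levels are used as trading signals to identify potential support and resistance levels, trend reversals, and price targets in financial markets.",
    "A stock exchange is a centralized marketplace where securities like stocks, bonds, and commodities are bought and sold by investors and traders.",
])
scores = model.similarity(query_embedding, context_embeddings)
best = scores.argmax().item()  # index of the best-matching context (here: 0)
```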
## Evaluation

### Metrics

#### Information Retrieval

- Evaluated with InformationRetrievalEvaluator

| Metric             | Value  |
|:-------------------|:-------|
| cosine_accuracy@3  | 0.675  |
| cosine_precision@3 | 0.225  |
| cosine_recall@3    | 0.675  |
| cosine_ndcg@3      | 0.5838 |
| cosine_mrr@3       | 0.5523 |
| cosine_map@3       | 0.5523 |
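An evaluation of this shape can be reproduced along the following lines with the model loaded above. The query, corpus, and relevance mappings here are hypothetical placeholders, not the actual evaluation split:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data: map query IDs and corpus IDs to texts,
# and each query ID to the set of relevant corpus IDs.
queries = {"q1": "How do traders use Fibonacci levels as trading signals?"}
corpus = {"c1": "Fibonacci levels are used as trading signals to identify potential support and resistance levels, trend reversals, and price targets in financial markets."}
relevant_docs = {"q1": {"c1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    accuracy_at_k=[3],
    precision_recall_at_k=[3],
    ndcg_at_k=[3],
    mrr_at_k=[3],
    map_at_k=[3],
)
results = evaluator(model)  # e.g. {"cosine_ndcg@3": ..., "cosine_mrr@3": ...}
```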
## Training Details

### Training Dataset

#### stock_trading_qa

- Dataset: stock_trading_qa at 35dab2e
- Size: 6,448 training samples
- Columns: anchor and context
- Approximate statistics based on the first 1000 samples:

|         | anchor | context |
|:--------|:-------|:--------|
| type    | string | string |
| details | min: 7 tokens, mean: 15.83 tokens, max: 39 tokens | min: 17 tokens, mean: 34.67 tokens, max: 59 tokens |

- Samples:

| anchor | context |
|:-------|:--------|
| How should I approach investing in a volatile stock market? | Diversify your portfolio, invest in stable companies, consider dollar-cost averaging, and stay informed about market trends to make informed trading decisions. |
| What is the role of cross-validation in assessing the performance of time series forecasting models for stock market trends? | Cross-validation helps evaluate the generalization ability of forecasting models by partitioning historical data into training and validation sets, ensuring that the model's performance is robust and reliable for future predictions. |
| What role does correlation play in statistical arbitrage and pair trading? | Correlation measures the relationship between asset prices and helps traders identify pairs that exhibit a stable price relationship suitable for pair trading. |

- Loss: MultipleNegativesRankingLoss with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
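For reference, MultipleNegativesRankingLoss treats each anchor's paired context as its positive and every other context in the same batch as a negative. With cosine similarity and the scale of 20 above, the per-anchor loss is the cross-entropy

$$
\mathcal{L}_i = -\log \frac{\exp\bigl(20 \cdot \cos(a_i, c_i)\bigr)}{\sum_{j=1}^{B} \exp\bigl(20 \cdot \cos(a_i, c_j)\bigr)},
$$

where $a_i$ and $c_j$ are the embedded anchors and contexts and $B$ is the batch size.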
### Evaluation Dataset

#### stock_trading_qa

- Dataset: stock_trading_qa at 35dab2e
- Size: 717 evaluation samples
- Columns: anchor and context
- Approximate statistics based on the first 717 samples:

|         | anchor | context |
|:--------|:-------|:--------|
| type    | string | string |
| details | min: 7 tokens, mean: 15.96 tokens, max: 30 tokens | min: 17 tokens, mean: 35.03 tokens, max: 62 tokens |

- Samples:

| anchor | context |
|:-------|:--------|
| How can anomaly detection in stock prices be used to identify market inefficiencies and opportunities for arbitrage? | Anomaly detection can help identify market inefficiencies by spotting mispricings and opportunities for arbitrage, where traders can exploit price differentials to make profits by trading on anomalies. |
| How do traders interpret the Head and Shoulders pattern as a trading signal? | The Head and Shoulders pattern is a reversal pattern with three peaks, where the middle peak (head) is higher than the other two (shoulders), signaling a potential trend reversal and offering a bearish trading signal. |
| How do traders use Fibonacci levels as trading signals? | Fibonacci levels are used as trading signals to identify potential support and resistance levels, trend reversals, and price targets in financial markets. |

- Loss: MultipleNegativesRankingLoss with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
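A comparable fine-tuning run could be set up roughly as follows. This is a sketch under assumptions: the train/eval split is not documented in this card, and the `args` object is the one sketched in the next section.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
# (anchor, context) pairs; the split is assumed, matching the 6,448/717 sizes
dataset = load_dataset("yymYYM/stock_trading_QA", split="train")
dataset = dataset.train_test_split(test_size=717, seed=42)

loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,  # SentenceTransformerTrainingArguments, see the next section
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()
```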
### Training Hyperparameters

#### Non-Default Hyperparameters

- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- optim: adamw_8bit
- batch_sampler: no_duplicates
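A minimal sketch translating these non-default values into `SentenceTransformerTrainingArguments`; the `output_dir` is a placeholder, not taken from this card:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/bge-base-stock-trading-qa",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # effective batch size: 16 * 16 = 256
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    optim="adamw_8bit",
    # avoid duplicate texts within a batch, which would act as false
    # negatives for MultipleNegativesRankingLoss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```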
#### All Hyperparameters

<details><summary>Click to expand</summary>

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_8bit
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

</details>
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | cosine_ndcg@3 |
|:------:|:----:|:-------------:|:---------------:|:-------------:|
| -1     | -1   | -             | -               | 0.4451        |
| 0.3970 | 10   | 5.7817        | 0.0765          | 0.5278        |
| 0.7940 | 20   | 1.295         | 0.0251          | 0.5608        |
| 1.1588 | 30   | 0.6208        | 0.0209          | 0.5771        |
| 1.5558 | 40   | 0.5701        | 0.0183          | 0.5789        |
| 1.9529 | 50   | 0.4546        | 0.0171          | 0.5882        |
| 2.3176 | 60   | 0.2861        | 0.0160          | 0.5839        |
| 2.7146 | 70   | 0.3315        | 0.0154          | 0.5818        |
| 3.0794 | 80   | 0.3179        | 0.0152          | 0.5852        |
| 3.4764 | 90   | 0.367         | 0.0150          | 0.5843        |
| 3.8734 | 100  | 0.354         | 0.0150          | 0.5838        |

### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```