---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6448
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: How are retail sales data integrated into trading models?
  sentences:
  - Lagged variables represent historical values of a time series variable and are used in forecasting models to capture the impact of past observations on future market trends, enhancing the accuracy of predictions by incorporating relevant historical information.
  - Retail sales data reflect consumer spending patterns and overall economic activity. Traders analyze this indicator to gauge consumer confidence, sectoral performance, and potential market trends related to retail-focused stocks.
  - Regulatory approval for a new drug can have a positive impact on a pharmaceutical company's stock price as it opens up new revenue streams and market opportunities.
- source_sentence: What impact does algorithmic trading have on market liquidity?
  sentences:
  - Volume analysis in stock trading involves studying the number of shares or contracts traded in a security or market over a specific period to gauge the strength or weakness of a price move.
  - Social media sentiment analysis can assist in detecting anomalies in stock prices by capturing public sentiment and opinions on stocks, identifying trends or sudden shifts in sentiment that may precede abnormal price movements.
  - Algorithmic trading can impact market liquidity by increasing trading speed, efficiency, and overall trading volume, leading to potential liquidity disruptions during certain market conditions.
- source_sentence: What considerations should traders take into account when selecting an adaptive trading algorithm?
  sentences:
  - Historical price data helps analysts identify patterns and trends that can be used to develop models for predicting future stock prices based on past performance.
  - Traders should consider factors such as performance metrics, risk management capabilities, adaptability to changing market conditions, data requirements, and the level of transparency and control offered by the algorithm.
  - A stock exchange is a centralized marketplace where securities like stocks, bonds, and commodities are bought and sold by investors and traders.
- source_sentence: How can currency exchange rates and forex markets be integrated into trading models alongside macroeconomic indicators?
  sentences:
  - Moving averages smooth out price data over a specified period, making it easier to identify trends and reversals in stock prices.
  - Currency exchange rates and forex markets are integrated into trading models to assess currency risk, international trade impact, and cross-border investment opportunities influenced by macroeconomic indicators.
  - Investors use quantitative momentum indicators to identify securities that are gaining positive momentum and potentially generating profits by buying those assets and selling underperforming assets.
- source_sentence: What role does back-testing play in refining event-driven trading strategies using historical data and real-time analysis?
  sentences:
  - Genetic algorithms are well-suited for solving multi-objective optimization problems, nonlinear and non-convex optimization problems, problems with high-dimensional search spaces, and problems where traditional methods may struggle to find optimal solutions.
  - Risk management techniques such as position sizing, portfolio diversification, and stop-loss orders are often used in quantitative momentum strategies to manage downside risk and protect against large losses.
  - Back-testing allows traders to evaluate the performance of event-driven trading strategies using historical data, identify patterns, optimize parameters, and refine strategies for real-time implementation.
datasets:
- yymYYM/stock_trading_QA
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@3
- cosine_precision@3
- cosine_recall@3
- cosine_ndcg@3
- cosine_mrr@3
- cosine_map@3
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@3
      value: 0.6750348675034867
      name: Cosine Accuracy@3
    - type: cosine_precision@3
      value: 0.22501162250116222
      name: Cosine Precision@3
    - type: cosine_recall@3
      value: 0.6750348675034867
      name: Cosine Recall@3
    - type: cosine_ndcg@3
      value: 0.5838116811117793
      name: Cosine Ndcg@3
    - type: cosine_mrr@3
      value: 0.5523012552301251
      name: Cosine Mrr@3
    - type: cosine_map@3
      value: 0.5523012552301255
      name: Cosine Map@3
---

# SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA)
- **Language:** en

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("iamleonie/leonies-test")
# Run inference
sentences = [
    'What role does back-testing play in refining event-driven trading strategies using historical data and real-time analysis?',
    'Back-testing allows traders to evaluate the performance of event-driven trading strategies using historical data, identify patterns, optimize parameters, and refine strategies for real-time implementation.',
    'Risk management techniques such as position sizing, portfolio diversification, and stop-loss orders are often used in quantitative momentum strategies to manage downside risk and protect against large losses.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [`InformationRetrievalEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| cosine_accuracy@3  | 0.675      |
| cosine_precision@3 | 0.225      |
| cosine_recall@3    | 0.675      |
| **cosine_ndcg@3**  | **0.5838** |
| cosine_mrr@3       | 0.5523     |
| cosine_map@3       | 0.5523     |

## Training Details

### Training Dataset

#### stock_trading_qa

* Dataset: [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) at [35dab2e](https://huggingface.co/datasets/yymYYM/stock_trading_QA/tree/35dab2e25b6da10842cfb0f832b715cab3765727)
* Size: 6,448 training samples
* Columns: `anchor` and `context`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | context |
  |:--------|:-------|:--------|
  | type    | string | string  |
  | details |        |         |
* Samples:
  | anchor | context |
  |:-------|:--------|
  | How should I approach investing in a volatile stock market? | Diversify your portfolio, invest in stable companies, consider dollar-cost averaging, and stay informed about market trends to make informed trading decisions. |
  | What is the role of cross-validation in assessing the performance of time series forecasting models for stock market trends? | Cross-validation helps evaluate the generalization ability of forecasting models by partitioning historical data into training and validation sets, ensuring that the model's performance is robust and reliable for future predictions. |
  | What role does correlation play in statistical arbitrage and pair trading? | Correlation measures the relationship between asset prices and helps traders identify pairs that exhibit a stable price relationship suitable for pair trading. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
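Conceptually, this loss scores each anchor against every context in the batch, then applies cross-entropy with the matching context as the correct class, so every other context acts as an in-batch negative; `scale` multiplies the cosine similarities before the softmax. The sketch below illustrates that computation on random toy embeddings. It is not the library's internal code, just a minimal illustration of the objective:

```python
import torch
import torch.nn.functional as F

# Toy batch: 3 anchor embeddings and their 3 paired context embeddings, unit-normalized
# so that dot products equal cosine similarities.
anchors = F.normalize(torch.randn(3, 768), dim=1)
contexts = F.normalize(torch.randn(3, 768), dim=1)

scale = 20.0                            # the "scale" parameter reported above
scores = scale * anchors @ contexts.T   # scaled cosine-similarity matrix, shape [3, 3]
labels = torch.arange(3)                # anchor i's positive is context i; the rest are in-batch negatives
loss = F.cross_entropy(scores, labels)
print(loss)
```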
### Evaluation Dataset

#### stock_trading_qa

* Dataset: [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) at [35dab2e](https://huggingface.co/datasets/yymYYM/stock_trading_QA/tree/35dab2e25b6da10842cfb0f832b715cab3765727)
* Size: 717 evaluation samples
* Columns: `anchor` and `context`
* Approximate statistics based on the first 717 samples:
  |         | anchor | context |
  |:--------|:-------|:--------|
  | type    | string | string  |
  | details |        |         |
* Samples:
  | anchor | context |
  |:-------|:--------|
  | How can anomaly detection in stock prices be used to identify market inefficiencies and opportunities for arbitrage? | Anomaly detection can help identify market inefficiencies by spotting mispricings and opportunities for arbitrage, where traders can exploit price differentials to make profits by trading on anomalies. |
  | How do traders interpret the Head and Shoulders pattern as a trading signal? | The Head and Shoulders pattern is a reversal pattern with three peaks, where the middle peak (head) is higher than the other two (shoulders), signaling a potential trend reversal and offering a bearish trading signal. |
  | How do traders use Fibonacci levels as trading signals? | Fibonacci levels are used as trading signals to identify potential support and resistance levels, trend reversals, and price targets in financial markets. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `optim`: adamw_8bit
- `batch_sampler`: no_duplicates
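Under the sentence-transformers trainer API, these non-default values correspond roughly to the configuration sketched below. This is a hedged reconstruction, not the exact training script: `output_dir` is a placeholder, the split names of the Hub dataset are an assumption, and `adamw_8bit` requires the `bitsandbytes` package:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Assumption: a single "train" split, partitioned into the 6,448 / 717 samples reported above.
dataset = load_dataset("yymYYM/stock_trading_QA", split="train").train_test_split(test_size=717, seed=42)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MultipleNegativesRankingLoss(model)  # scale=20.0 and cos_sim are the defaults

args = SentenceTransformerTrainingArguments(
    output_dir="output/",            # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # effective batch size of 256 per device
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    optim="adamw_8bit",              # needs bitsandbytes installed
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()
```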
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_8bit
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
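The `cosine_ndcg@3` column in the training logs below comes from an `InformationRetrievalEvaluator` run over the held-out pairs. The sketch below shows how such an evaluator can be assembled; the id scheme and the single query/document pair are illustrative placeholders (with only one pair the metrics are trivially perfect), whereas the real evaluation used all 717 evaluation samples:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("iamleonie/leonies-test")

# Illustrative ids: one query per anchor, one corpus document per context.
queries = {"q0": "How do traders use Fibonacci levels as trading signals?"}
corpus = {"d0": (
    "Fibonacci levels are used as trading signals to identify potential support "
    "and resistance levels, trend reversals, and price targets in financial markets."
)}
relevant_docs = {"q0": {"d0"}}  # each anchor's own context counts as the relevant document

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    accuracy_at_k=[3],
    precision_recall_at_k=[3],
    mrr_at_k=[3],
    ndcg_at_k=[3],
    map_at_k=[3],
)
results = evaluator(model)  # dict with keys such as "cosine_ndcg@3"
print(results)
```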
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | cosine_ndcg@3 |
|:------:|:----:|:-------------:|:---------------:|:-------------:|
| -1     | -1   | -             | -               | 0.4451        |
| 0.3970 | 10   | 5.7817        | 0.0765          | 0.5278        |
| 0.7940 | 20   | 1.295         | 0.0251          | 0.5608        |
| 1.1588 | 30   | 0.6208        | 0.0209          | 0.5771        |
| 1.5558 | 40   | 0.5701        | 0.0183          | 0.5789        |
| 1.9529 | 50   | 0.4546        | 0.0171          | 0.5882        |
| 2.3176 | 60   | 0.2861        | 0.0160          | 0.5839        |
| 2.7146 | 70   | 0.3315        | 0.0154          | 0.5818        |
| 3.0794 | 80   | 0.3179        | 0.0152          | 0.5852        |
| 3.4764 | 90   | 0.367         | 0.0150          | 0.5843        |
| 3.8734 | 100  | 0.354         | 0.0150          | 0.5838        |

### Framework Versions

- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```