forum_id
stringlengths
10
10
forum_title
stringlengths
5
188
forum_authors
sequencelengths
0
98
forum_abstract
stringlengths
3
4.69k
forum_keywords
sequencelengths
0
29
forum_pdf_url
stringlengths
40
40
note_id
stringlengths
10
10
note_type
stringclasses
5 values
note_created
int64
1,695B
1,737B
note_replyto
stringlengths
10
10
note_readers
sequencelengths
1
6
note_signatures
sequencelengths
1
1
note_text
stringlengths
14
30.1k
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
c4QkkGyfd4
official_review
1,731,424,236,739
I2FbCxyYVS
[ "everyone" ]
[ "~Zhu_Zhang6" ]
title: Good proposal, well explained review: **Summary:** This proposal presents a protein sequence generation model based on the Mamba architecture, aiming to improve efficiency and accessibility for protein modeling. By leveraging the Mamba model's selective state-space updates, which offer linear computational complexity and efficient handling of long-range dependencies, the authors seek to generate high-quality protein sequences with reduced computational resources. They plan to evaluate the model’s performance across multiple dimensions, including sequence generation quality, structural accuracy, and computational efficiency, with a focus on making protein modeling more accessible to researchers with limited resources. **Strengths:** 1. **Efficient Model Choice:** The use of Mamba architecture, known for its computational efficiency, addresses accessibility issues and makes complex protein modeling more feasible in resource-limited settings. 2. **Comprehensive Evaluation Framework:** The proposal outlines a multi-faceted evaluation framework, covering sequence quality, structural validation, and biological plausibility, which provides a thorough assessment of the model's capabilities. **Weaknesses:** 1. **Evaluation of Long-Term Stability Missing:** There is no mention of testing the long-term stability or usability of generated sequences in practical protein engineering applications. **Questions:** 1. How will the authors ensure that the generated sequences maintain functionality and stability in real biological contexts? 2. Is there a plan to compare the model’s performance with existing transformer-based protein models, specifically in terms of computational cost and sequence quality? rating: 9 confidence: 3
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
Q7s2UkX7pH
official_review
1,731,064,276,306
I2FbCxyYVS
[ "everyone" ]
[ "~Yunghwei_Lai1" ]
title: review review: The project proposes using a small-scale Mamba architecture, which is a compelling choice. Mamba's selective state-space updates and linear computational complexity make it well-suited for handling long protein sequences and capturing complex dynamics. This project has a well-defined goal, focusing on an accessible yet effective architecture for protein sequence generation. Its emphasis on reducing computational requirements is highly relevant for academia. Clarifying methodological and evaluation aspects would enhance the project, but overall, it’s a strong proposal with potential for impactful results in the computational biology community. rating: 10 confidence: 4
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
NbN1zF6zJU
official_review
1,731,315,079,916
I2FbCxyYVS
[ "everyone" ]
[ "~Hector_Rodriguez_Rodriguez1" ]
title: Review of “Protein Sequence Generation Model” review: The authors propose using a small Mamba model for protein sequence generation. - The introduction highlights the challenges of protein sequence generation and explains the unique characteristics of the Mamba architecture that make it competitive with LLMs while requiring fewer computational resources. However, the clarity of Figure 1 could be improved by using larger typography, and the caption should be revised to accurately describe the figure. - The related work section provides a good understanding of the state-of-the-art in protein sequence generation. This section could benefit from more consistent use of terms like “protein language modeling,” “protein sequence generation,” and “protein sequence modeling,” which would enhance clarity. - The proposed architecture incorporates multiple aspects that ideally should be disclosed in more detail, particularly the meaning of each parameter. Additionally, it would be valuable to clarify which aspects of the model are novel compared to previously mentioned implementations like ProtMamba or PTM-Mamba. - The evaluation framework and expected results are thorough and well planned, and the dataset selection is justified in detail. Overall, the proposal is clear and well-written, providing a good understanding of the problem and the future work. rating: 9 confidence: 4
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
LMSfiSqWBs
official_review
1,731,386,544,598
I2FbCxyYVS
[ "everyone" ]
[ "~Grace_Xin-Yue_Yi1" ]
title: Review review: The proposal discusses the importance of protein sequence generation and the challenges associated with current models in terms of computational resources and sequence length handling. It clearly outlines the motivation for using the Mamba architecture, emphasizing efficiency and accessibility. The proposal also includes a well-researched review of related work, from foundational models like UniRep and TAPE to advanced models like ProtGPT2 and ProtMamba. The evaluation section is detailed and thorough, outlining various aspects such as sequence modeling capability, computational efficiency, and sequence quality. rating: 10 confidence: 3
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
KSv1WStVlA
official_review
1,731,406,854,400
I2FbCxyYVS
[ "everyone" ]
[ "~ChenJian1" ]
title: Brief review review: The proposal presents a small-scale Mamba architecture-based protein sequence generation model, aiming to efficiently handle long sequences and simulate protein dynamics through selective state-space updates. The study emphasizes achieving competitive performance while reducing computational resource requirements, making advanced protein sequence modeling more accessible to the academic research community. With a multi-dimensional evaluation framework and experimental workflow, the study is expected to provide valuable references for the protein engineering field, especially in computationally constrained academic settings. ### Strengths: ①The proposed Mamba architecture offers linear computational complexity, efficient handling of long-range sequence dependencies, and natural alignment with protein dynamics. ②The model design considers parameter optimization to balance expressiveness and efficiency, adaptable hidden state dimensions based on sequence complexity, and efficient batch processing for training acceleration. ③The comprehensive evaluation framework covers multiple aspects of protein sequence analysis and generation, including sequence modeling capability, computational efficiency analysis, and sequence quality assessment. ④The experimental design is rigorous, using the UniRef50 dataset to ensure comprehensive coverage while maintaining computational feasibility. ### Weaknesses: ①Although the advantages of the Mamba architecture are proposed, the proposal lacks specific details on how to handle complex biochemical characteristics within protein sequences. ②While the experimental part is well-designed, there is a lack of assessment of the model's generalization ability, especially its performance on different types of protein sequences. ③The proposal is not clear enough on the model's scalability and the direction of future work. rating: 9 confidence: 4
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
6lR2LzOofr
official_review
1,730,993,070,790
I2FbCxyYVS
[ "everyone" ]
[ "~Juncheng_Yu1" ]
title: Innovative Application of Mamba Architecture for Efficient Protein Sequence Prediction review: ## Summary This paper explores the application of the Mamba architecture to protein sequence prediction, representing an innovative interdisciplinary practice in computational biology. The study leverages Mamba’s selective state-space updates, which offer advantages in computational efficiency and handling long-range dependencies, making this approach both innovative and feasible. The paper presents a comprehensive framework, including model evaluation metrics and a detailed experimental design, demonstrating a thoughtful approach to achieving high-performance protein sequence modeling. ## Strengths - **Innovative Exploration of Mamba Architecture**: The application of Mamba architecture in protein sequence prediction is creative and shows promise for advancing the field. The selective state-space updates provide an efficient means to model protein sequences, which could be highly valuable for resource-constrained settings. - **Importance of the Research Problem**: Protein sequence prediction is a vital area of study with broad implications for biological research and practical applications. Addressing this challenge could aid in advancing drug discovery and understanding protein functions, underlining the relevance of this research. - **Thorough Literature Review and Feasibility**: The paper provides a well-rounded review of related work, demonstrating the authors' strong understanding of the field. The chosen approach appears feasible, with a clear plan for implementation and evaluation. ## Weaknesses - **Small and Unclear Figures**: The figures in the paper are too small and lack clarity, making it difficult to interpret the visual information provided. Larger, more detailed figures would significantly improve the readability and comprehension of the model architecture and results. - **Writing Quality**: The writing could benefit from improved clarity and logical flow. More precise and sophisticated word choices would enhance the paper’s presentation. ## Score - **Soundness**: 8/10 - **Contribution**: 8/10 - **Presentation**:7/10 rating: 8 confidence: 3
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
5C5emasHYM
official_review
1,731,411,859,034
I2FbCxyYVS
[ "everyone" ]
[ "~Yuji_Wang4" ]
title: Review of “Protein Sequence Generation Model” review: The authors propose to explore applying small-scale Mamba models for protein sequence generation, which is expected to be an efficient approach than previous methods based on other architecture like transformers. ### Strengths 1. Topic selection: Both Mamba and protein sequence generation are popular research topics with high practical value, making the project a meaningful exploration. 2. Clear proposal structure:The proposal clearly defines the problem and provides a thorough review of related works. Meanwhile, the offered training and evaluation methods build a strong foundation for the project. ### Weaknesses 1. Lack of clarity: More detail is needed. For example, readers unfamiliar with the topics may not know why Mamba models are well-suited for this task and how they outperform transformers. 2. The figure may be confusing as readers can not easily understand the three parts mentioned in the caption. rating: 9 confidence: 3
I2FbCxyYVS
[Proposal-ML]Protein Sequence Generation Model
[ "pan jiang", "Zeeshan Zulfiqar", "Ivan Iazykov" ]
This proposal outlines a novel approach for protein sequence generation, a key area in computational biology focused on generating sequences with specific functional and structural characteristics. Despite recent advances with deep learning models, challenges remain in handling long sequences and capturing biological dynamics. We propose implementing a Mamba architecture that leverages selective state-space updates to achieve efficient protein sequence generation. With linear computational complexity and effective handling of long-range dependencies, this architecture is designed to be more suited to biological dynamics while reducing resource requirements. A comprehensive evaluation on the UniRef50 dataset will demonstrate its potential to deliver competitive performance, providing a viable solution for research environments with limited computational resources.
[ "protein sequence generation", "Mamba architecture", "efficient modeling", "protein language models", "biological dynamics" ]
https://openreview.net/pdf?id=I2FbCxyYVS
1e7fmNmah0
official_review
1,731,416,964,016
I2FbCxyYVS
[ "everyone" ]
[ "~Kaiwei_Zhang3" ]
title: The format needs to be revised review: **1. Summary:** This research proposal presents a small-scale implementation of the Mamba architecture for protein sequence generation, targeting efficiency and accuracy in handling long protein sequences. The work aims to balance performance and computational efficiency, making advanced protein modeling techniques accessible for researchers with limited resources. **2. Clarity:** The proposal is mostly clear, yet some sections could benefit from more explanation, such as the formulas in *Proposed Method* part. **3. Originality:** The work is original in its focus on adapting the Mamba architecture for protein sequence generation at a smaller scale, making high-performance protein modeling more accessible. **4. Significance:** The proposal focuses on a crucial problem. If successful, it could significantly benefit researchers in resource-constrained settings, enabling them to achieve competitive results without extensive computational resources. **5. Pros:** * **Efficient model design.** The Mamba architecture’s linear complexity and adaptive state updates are promising for reducing computational requirements. * **Practical application**. Targeting computational efficiency makes the approach accessible to a wider range of researchers, addressing a common limitation in the field. **6. Cons:** * **Formatting issue.** Names of the authors are not written. Text in figure 1 is too small. Index of each line should not be shown. A special font should be used for the big-O notations. * **Lack of clarity.** The formulas in *Proposed Model* Section needs to be furthur illustrated. The meaning of the equations and symbols are not clear. rating: 8 confidence: 3
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
xxsgtsFTsQ
official_review
1,731,061,235,577
H5rJNzKqFe
[ "everyone" ]
[ "~Ruitao_Jing1" ]
title: A Promising Contribution to Sustainable Building Practices review: The paper explores the application of machine learning to predict indoor population dynamics for improved building temperature management and energy efficiency, a topic of high practical value. It innovatively addresses the limitations of previous methods by focusing on data analysis and the impact of random events like COVID-19 on occupancy patterns. However, the paper could benefit from a formal problem definition, detailed model descriptions, and clear model evaluation metrics. Enhancing these aspects will strengthen the methodology and practical implications of the research. rating: 9 confidence: 4
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
xhPZ2EShxb
official_review
1,731,224,125,457
H5rJNzKqFe
[ "everyone" ]
[ "~Zihan_Yan2" ]
title: A Proposal with a good topic review: This proposal aims to optimize building energy consumption by predicting indoor occupancy, particularly in terms of air conditioning and heating systems. The research background highlights the high proportion of building energy use in China’s total energy consumption and points out that predicting indoor occupancy can enable more efficient control over the start-stop times and operating power of air conditioning and heating systems, leading to energy savings and emissions reduction. The study will use various methods, such as Support Vector Machine (SVM), Auto-Regressive Integrated Moving Average (ARIMA), Random Forest, and Deep Neural Networks, to predict indoor occupancy and evaluate the effectiveness of these approaches. This research has a strong practical background, requires interdisciplinary collaboration, and carries significant practical implications. rating: 9 confidence: 3
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
kQQ6EDUxNC
official_review
1,731,317,038,094
H5rJNzKqFe
[ "everyone" ]
[ "~Anqi_LI5" ]
title: review review: This paper explores the crucial topic of predicting indoor occupancy in buildings for energy efficiency, which is significant given the rising energy consumption in building operations. The research aims to utilize various machine learning techniques to predict the number of occupants and assess their effectiveness. Pros: Relevance and Significance: The research addresses a pressing issue of energy consumption in buildings, which is a major contributor to greenhouse gas emissions. Predicting occupancy can optimize HVAC systems, leading to energy savings and reduced environmental impact. Comprehensive Approach: The paper considers multiple machine learning methods (SVM, ARIMA, Random Forest, Deep Neural Networks) for occupancy prediction, providing a comparative analysis of their performance. This allows for a more informed decision on the most suitable method for specific applications. Use of Real-World Data: The research utilizes a dataset collected from an actual office environment, ensuring the practical relevance of the findings. The data covers a significant period (April 2022 to March 2024), providing valuable insights into occupancy patterns. Acknowledgment of Challenges: The paper acknowledges the limitations of occupancy prediction, such as data noise, random events, and the impact of the COVID-19 pandemic. This demonstrates a realistic understanding of the problem and potential challenges. Cons: Limited Scope of Data: The dataset is limited to a single office space, which may not fully represent occupancy patterns in different building types or environments. Generalizability of the findings could be limited. Lack of Detailed Methodology: While the paper mentions various machine learning methods, it lacks detailed descriptions of the specific algorithms, hyperparameter tuning, and model evaluation metrics used. This makes it difficult to replicate the results or assess the robustness of the models. Limited Analysis of Results: The paper presents prediction accuracy as the primary evaluation metric, but lacks a deeper analysis of the predictions, such as understanding the reasons behind the errors or exploring the limitations of each method. No Comparison with Baseline Methods: The paper compares the performance of different machine learning methods but does not compare them with simpler baseline methods, such as using historical average occupancy or time-based rules. This would provide a more comprehensive understanding of the added value of the proposed methods rating: 8 confidence: 3
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
gpyddGwcab
official_review
1,731,409,654,268
H5rJNzKqFe
[ "everyone" ]
[ "~Xuancheng_Li1" ]
title: review review: This paper explores predicting indoor occupancy using machine learning to improve building energy efficiency, a significant concern given China’s high building energy consumption. By forecasting room occupancy, heating and cooling systems can be optimized, reducing energy waste. The study compares several methods, including SVM, ARIMA, Random Forest, and Deep Neural Networks. Strengths The research addresses a relevant and impactful problem, applying machine learning to an area with high energy-saving potential. Comparing multiple prediction methods provides a balanced view of model effectiveness. Weaknesses Focusing on a single room limits the study's generalizability. More details on the dataset and performance criteria could enhance clarity. Conclusion This is a solid preliminary study on occupancy prediction for energy savings. Expanding to multi-room setups and testing real-time integration could strengthen its practical impact. rating: 6 confidence: 3
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
fULMLHt01E
official_review
1,731,293,494,833
H5rJNzKqFe
[ "everyone" ]
[ "~Matteo_Jiahao_Chen1" ]
title: Good proposal for reducing building energy consumption review: Practical proposal for reducing the energy consumption, but it could benefit from a formal problem definition, detailed model descriptions, and clear model evaluation metrics. rating: 8 confidence: 5
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
YeypglrptI
official_review
1,731,055,057,902
H5rJNzKqFe
[ "everyone" ]
[ "~Anton_Johansson1" ]
title: Well written proposal for an important topic review: This proposal tackles an interesting and practical problem. The layout is clear, and the background is solid. The method is also good but could benefit from more details. It’s great that the proposal focuses on a specific room dataset with detailed time-series data. Including image recognition technology and manual adjustments for accuracy in the dataset is a strong approach. One suggestion would be to address how the model might handle high variability during events like COVID-19. Overall, this is a well-thought-out and relevant study with promising potential for energy efficiency! rating: 8 confidence: 4
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
NrIzeT7ukn
official_review
1,731,048,759,851
H5rJNzKqFe
[ "everyone" ]
[ "~Bowen_Su1" ]
title: Meaningful Question and Useful Method review: This proposal provides a detailed and practical discussion on predicting the number of people inside a house. Firstly, a comprehensive introduction is given to the background of the problem, and the necessity and importance of the problem are fully analyzed. Afterwards, a detailed introduction was given to the research of relevant scholars at home and abroad during the literature review stage, which well met the requirements for the proposal. In the stage of proposing methods, several types of methods that the author intends to adopt were listed, and the sources of the dataset were fully introduced. A solution has been proposed for a highly applicable and meaningful problem, which is a very meaningful research. However, the proposed solution can be more detailed, and some feature engineering methods can also be used to process existing datasets to improve overall presentation. rating: 9 confidence: 4
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
EmZTkHu8H0
official_review
1,730,957,127,999
H5rJNzKqFe
[ "everyone" ]
[ "~Iat_Long_Iong1" ]
title: Great idea but needs more specifics review: This proposal aims to design an accurate indoor occupancy prediction system to optimize air conditioning for energy savings and emission reductions. The proposal recognizes the importance of accurately predicting occupant numbers to optimize HVAC system operations and improve energy efficiency. However, the proposal lacks details of the prediction methodology, such as how many 15-minute intervals of data the algorithm would need to observe in order to make a prediction, and the advantages of using image recognition rather than a sensor-based approach. Addressing these details would improve its feasibility and practical application. Overall, the proposal is relevant and promising but needs more specifics and empirical evidence to enhance its credibility and practicality. rating: 8 confidence: 5
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
9nISUh4uwH
official_review
1,730,905,221,270
H5rJNzKqFe
[ "everyone" ]
[ "~Daniel_Wang4" ]
title: Optimizing Building Energy through Occupancy Prediction: A Practical Yet Data-Intensive Approach review: The proposal, Predicting the Number of People in a Room, presents a thoughtful approach to reducing building energy consumption by predicting indoor occupancy. Using a dataset from Tsinghua University, the authors plan to apply machine learning techniques like SVM, ARIMA, Random Forest, and deep neural networks to predict occupancy, with the aim of optimizing heating and cooling loads. The methodology is solid, leveraging a robust dataset with frequent measurements, but could benefit from more clarity on how different prediction models will be compared and selected. Overall, the proposal is practical and relevant, addressing a key issue in energy management with a data-driven approach. rating: 10 confidence: 3
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
9myIHTunAS
official_review
1,731,424,372,295
H5rJNzKqFe
[ "everyone" ]
[ "~liyingxin1" ]
title: Meaningful topic. But what is the connection with the large model ? review: A very meaningful topic. But the author need more effort to compare the basis and applicable scenarios for selecting different models in the methodology section, explain why these models were chosen for prediction, and provide the parameter settings and optimization process for each model, especially no connection with the content in this large model course. rating: 7 confidence: 4
H5rJNzKqFe
Predicting the Number of People In a Room
[ "Kuanghao Wang" ]
At present, the energy consumption of the whole building process and the energy consumption of building operation in China accounts for a high proportion of the country's total energy consumption, and reducing the energy consumption of building operation is an important part of saving energy, reducing greenhouse gas emissions, mitigating climate change and achieving carbon neutrality. Building energy consumption is mainly composed of two parts: cooling system energy consumption and heating system energy consumption, both of which are related to indoor cooling and heating loads. A large part of the indoor load is generated by indoor personnel, and the load has a lagging effect. Therefore, if the number of indoor personnel can be predicted in advance, the start/stop time and operating power of the cooling and heating systems can be determined in advance, so as to avoid wastage of cold and heat, and thus to achieve the purpose of energy saving and emission reduction. In the past, there are fewer studies on the time series prediction of indoor occupancy, so this paper will take a specific room as the object of study, based on its occupancy monitoring data, use different methods such as Support Vector Machine (SVM), Auto Regressive Integrated Moving Average (ARIMA), Random Forest, Deep Neural Networks, etc. to predict the number of indoor occupants, and evaluate the advantages and disadvantages of the effectiveness of various prediction methods.
[ "indoor people number", "SVM", "deep learning", "machine learning" ]
https://openreview.net/pdf?id=H5rJNzKqFe
4Co4wzYFc9
official_review
1,731,302,445,203
H5rJNzKqFe
[ "everyone" ]
[ "~Hector_Rodriguez_Rodriguez1" ]
title: Interesting Use Case for Predictive Control review: The author proposes using SVM, ARIMA, Random Forest, or Deep Neural Networks to predict the heat load in a room based on date, time, and other environmental measurements. - The proposal’s background is solid but could be strengthened by emphasizing that predictive control offers advantages beyond temperature stabilization. Unlike reactive control, predictive control allows for adjustments to be made in advance, enabling the AC system to operate at a higher efficiency point. - The literature review is thorough and clearly highlights the need for further research on this topic. - The research methodology presents a solid dataset and clear preprocessing plan. However, the selection and evaluation criteria of the predictive models could be disclosed in more detail. Overall, the writing is clear, though it could be more concise to improve readability. rating: 8 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
wS0HwvXtln
official_review
1,731,223,071,908
GCdVXIaAbe
[ "everyone" ]
[ "~Xiaoqian_Liu7" ]
title: Clear Problem Statement and Methodology review: The proposal "Cross-Cultural Language Adaptation: Fine-Tuning GEMMA 2 for Diverse Linguistic Contexts" presents a method to improve translation quality between Chinese and Malay, particularly for scientific papers, using English as a pivot language. The authors aim to leverage Google's GEMMA 2 model family to facilitate knowledge transfer from high-resource languages to low-resource ones, addressing the language bias on the web and promoting inclusivity in multilingual communication. The methodology is sound, utilizing Supervised Fine-Tuning (SFT) with Low-Rank Adaptation (LoRA) for parameter-efficient model adaptation and developing a direct translation capability between the two languages through Multilingual Neural Machine Translation (MNMT). The authors also propose a curated dataset from scientific articles and synthetic data generation to focus on domain-specific vocabulary and sentence structures. The evaluation plan includes a combination of human and automated metrics such as COMET, BLEU, BERT-F1, and CHRF, which are appropriate for assessing translation quality and domain-specific language preservation. rating: 10 confidence: 3
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
qbLJayGpEe
official_review
1,730,904,205,974
GCdVXIaAbe
[ "everyone" ]
[ "~Daniel_Wang4" ]
title: Submission 47 Review review: The proposal, presents a well-structured approach to improving translation between Chinese and Malay, focusing specifically on scientific texts. By leveraging Google’s Gemma-2 model and using English as a pivot, the authors address the challenge of low-resource language translation, which is essential for making scientific knowledge more widely accessible. Their methodology is sound, involving a two-step translation process,first from Chinese to English, then from English to Mala, using fine-tuning techniques that include LoRA for efficient adaptation. The proposal also introduces a direct translation pathway from Chinese to Malay, enhancing speed and accuracy in the process. A carefully curated dataset of biology-related texts, complemented by synthetic data, further strengthens the approach, helping the model become adept at handling specialized vocabulary. Overall, this is a promising proposal with a clear vision and thoughtful methodology. The focus on cross-linguistic scientific communication is unique, and the techniques outlined are well-suited to the project’s objectives. rating: 10 confidence: 5
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
m7e7i4gJ4i
official_review
1,731,090,714,016
GCdVXIaAbe
[ "everyone" ]
[ "~Zijun_Liu2" ]
title: Review and Feedback review: ## Overview The proposal aims to enhance multilingual translation models by focusing on low-resource language pairs, specifically Chinese-Malay translations in scientific contexts. The authors proposed to use English as a pivot language and wanted to explore cross-language transfer methods. ## Comments The proposed use of both human evaluation and automated metrics aligns well with best practices in NLP. And the introduction of related work is informative. However, given the curation process of the dataset, I would recommend the authors to compare with a baseline that uses all Chinese-Malay parallel data synthesized from English. This would help to show the significance of cross-lingual transfer methods. rating: 10 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
m7ZV2ZmE6Z
official_review
1,731,078,747,285
GCdVXIaAbe
[ "everyone" ]
[ "~Joydeep_Chandra2" ]
title: The focus on adapting the Gemma 2 model for low-resource languages like Chinese and Malay is relevant and aligned with current trends in multilingual NLP. review: The use of a pivot-based translation strategy with English as an intermediary, combined with supervised fine-tuning (SFT) and synthetic data generation, is well thought out for handling domain-specific challenges. The proposed metrics (COMET, BLEU, BERT-F1, CHRF) are good choices, but the plan for how these metrics will be applied could be clarified. Including a brief explanation of the expected baseline comparisons would provide more context. rating: 8 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
jwPc5as8e4
official_review
1,731,182,311,186
GCdVXIaAbe
[ "everyone" ]
[ "~Lu_Fan_DB1" ]
title: Review of the Proposal: "Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts" review: Summary: This proposal focuses on improving translation quality between Chinese and Malay, especially in scientific texts, by fine-tuning Google’s Gemma-2 model with English as a pivot language. The approach leverages techniques like Supervised Fine-Tuning (SFT) with Low-Rank Adaptation (LoRA) and a multilingual neural machine translation (MNMT) model, aiming to enhance accessibility to scientific knowledge across low-resource languages. Strengths: Innovative Pivot-Based Methodology: The use of English as an intermediary language to improve low-resource language translations is a well-motivated approach, promising in scenarios where direct translations are challenging. Domain-Specific Focus: Targeting biology-related texts enhances the model's real-world applicability, as scientific translation is often limited in low-resource languages. Comprehensive Evaluation Plan: The proposal includes a well-rounded evaluation strategy using metrics like COMET, BLEU, and BERT-F1, which are suitable for assessing translation quality across multiple dimensions. Areas for Improvement: Dataset Expansion: Increasing the size and diversity of datasets, particularly with more real-world, domain-specific examples, would likely enhance the model’s performance and generalizability. Potential Limitations: Briefly discussing challenges such as syntactic differences between Chinese and Malay or limitations in using English as a pivot could provide a clearer scope of the project. Overall Assessment: This proposal presents a thoughtful and structured approach to address cross-cultural language adaptation in low-resource settings. By refining the evaluation and addressing some practical challenges, the study could make a significant impact on multilingual accessibility in scientific literature. rating: 8 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
jQbbXKAH62
official_review
1,731,158,822,811
GCdVXIaAbe
[ "everyone" ]
[ "~Tim_Bakkenes1" ]
title: Good proposal review: This is a very good proposal, well done. Pros: - There is a clear problem statement which motivates the need for your research. - The proposed methodology is well structured and innovative. To use English as a pivot language is an interesting approach and you clearly outline how the translation process will work. How the dataset will be curated is also clearly outlined which strengthens your proposal. Cons: - It would have been nice if a more formal problem definition was included. - It would also be nice if you described what baseline you would be comparing your model to. The metrics are very well described but is there another model you would compare your model to for those metrics? Overall, very good and I look forward to seeing how your model performs. rating: 10 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
XmidKSsRHK
official_review
1,731,343,270,803
GCdVXIaAbe
[ "everyone" ]
[ "~Michael_Hua_Wang1" ]
title: Review review: The absence of a sufficient corpus of training data is a perennial problem when it comes to training ML models, and this absence directly contributes to poor performance of LLMs in languages other than those most commonly used online. This project proposes to use the language with the most available data (English) as an intermediary to enhance translations between two other languages (in this case, Chinese and Malay). The mechanism by which this might be done is well explained, and it seems feasible to accomplish this. However, as far as I can tell, the proposal does not directly address the issue of data paucity with respect to low-resource languages, as presumably the root model being used for translation would be subject to the same issues. More explicit discussion to link the project with the problem initially discussed is perhaps warranted. rating: 8 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
QaRiKFeDeB
official_review
1,731,389,857,305
GCdVXIaAbe
[ "everyone" ]
[ "~Zhuofan_Sun1" ]
title: Review of "Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts" review: The work specifically targets low-resource languages in the context of scientific translations, using English as a pivot language. Strengths:1.Clear Problem Definition: The proposal identifies a relevant problem: the lack of robust machine translation models for low-resource languages, particularly in scientific contexts. This addresses a gap in multilingual natural language processing, especially for languages like Malay, which often lack extensive datasets.; 2. Innovative Approach: The methodology is innovative in its use of English as a pivot language, allowing for incremental fine-tuning with LoRA and supervised fine-tuning (SFT). This modular approach allows adaptation without retraining the entire model, which is efficient and cost-effective. 3.Detailed Methodology: The authors present a well-defined, step-by-step methodology, starting from fine-tuning the model for Chinese-English translation and then moving on to English-Malay. The planned transition to a direct Chinese-Malay translation model without the English pivot demonstrates a clear progression in model development. 4.Domain-Specific Focus: The selection of biology-related scientific texts is a strategic choice. It allows the model to develop an understanding of specific vocabulary and structures, making it more effective for the intended application in scientific translation. Overall, this proposal is well-structured and addresses a significant challenge in multilingual NLP. By focusing on scientific translation between low-resource languages, the authors contribute to making scientific knowledge more accessible across linguistic barriers. The methodology is well-thought-out, with appropriate technical sophistication and a strong foundation in prior work. Addressing the concerns around dataset size, pivot language limitations, and evaluation expansion would enhance the project’s impact and scalability. rating: 10 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
OMpdMfkhIK
official_review
1,731,419,576,912
GCdVXIaAbe
[ "everyone" ]
[ "~Chumeng_Jiang1" ]
title: Well-designed Pipeline review: This proposal seeks to enhance multilingual machine translation between low-resource languages, currently focusing on Chinese and Malay, and with an emphasis on translating scientific texts. Leveraging Google’s GEMMA 2 model, the approach includes fine-tuning the model through an English pivot to achieve high-quality translations and eventually eliminate the need for an English intermediary. Key methodologies include supervised fine-tuning with LoRA for parameter efficiency and transfer learning for direct cross-linguistic adaptation. The project will use metrics like COMET and BLEU for evaluation. Strengths: - **Methodological Clarity and Thoughtfulness:** The proposal is well-structured with a clear multi-step approach, encompassing pivot-based fine-tuning, transfer learning, and dataset curation. This detailed methodology reflects a deep understanding of translation model adaptation challenges. - **Comprehensive Automated Evaluation Framework:** A strong set of evaluation metrics (COMET, BLEU, BERT-F1, and CHRF) has been proposed to ensure a comprehensive assessment of translation quality, considering both semantic similarity and technical language preservation. Weaknesses: - **Unclear explanation of human evaluation process:** The combination of low-resource language and scientific texts is likely to make human evaluation quite challenging. I’m wondering how this part will be specifically designed. - **Lack of Justification for English Pivot Removal:** Why would applying MNMT after the multi-step process lead to an improvement rather than a loss in performance? Theoretically, MNMT might perform worse than the proposed method. rating: 9 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
9Tr5FI6sMS
official_review
1,731,054,247,378
GCdVXIaAbe
[ "everyone" ]
[ "~Anton_Johansson1" ]
title: Good proposal review: The proposal has a clear layout and is easy to follow, sticking to the two-page rule. The proposal is well-supported by literature, adding credibility to the research. Your methodology is solid and clear. For example, you have a specific number for your dataset curation (30-50k sentences), which is good. Further, data primarily from biology-related articles is great for ensuring domain-specific accuracy and limiting the scope. Including human and automated metrics (COMET, BLEU, BERT-F1, etc.) will provide a thorough assessment and adds robustness to your evaluation process. Good luck with your project! rating: 10 confidence: 4
GCdVXIaAbe
[Proposal-ML] Cross-Cultural Language Adaptation: Fine-Tuning Gemma 2 for Diverse Linguistic Contexts
[ "Yunghwei Lai", "Jia-Nuo Liew", "Grace Xin-Yue Yi" ]
This paper introduces a novel approach for enhancing translation quality between low-resource languages, specifically focusing on Chinese and Malay in scientific domains. Leveraging Google’s GEMMA-2-9B model, we utilize a pivot-based Multilingual Neural Machine Translation (MNMT) strategy with English as an intermediary language. Our methodology involves a multi-step process of fine-tuning, using Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA), to improve translation efficiency and accuracy. This approach includes dataset curation from biology-specific scientific articles, supplemented with synthetic data to strengthen the model's handling of domain-specific terminology and complex linguistic structures. To assess model performance, we employ both automated and human evaluation metrics, including COMET, BLEU, BERT-F1, and CHRF, targeting high-quality results. Our findings demonstrate the potential of pivot-based MNMT methods in bridging low-resource language gaps in scientific knowledge, presenting a scalable solution that could expand to other languages and domains, fostering inclusivity in multilingual communication.
[ "Multiligual Model", "Large Language Model", "Knowledge Transfer", "Multilingual Neural Machine Translation" ]
https://openreview.net/pdf?id=GCdVXIaAbe
8bFegs0JeM
official_review
1,730,885,163,189
GCdVXIaAbe
[ "everyone" ]
[ "~Aleksandr_Algazinov1" ]
title: Clear problem statement and methodology review: The proposal is easy to follow and well-written. The authors consider the relevant machine translation for a specific category of languages problem. The authors explain in detail the proposed methodology, as well as the model evaluation methods. Moreover, they explain the potential benefits and scalability of the project. rating: 10 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
wtWlOBphmD
official_review
1,730,986,193,441
FMe1EenIGh
[ "everyone" ]
[ "~Juncheng_Yu1" ]
title: Advancing Multi-Domain Recommendation through Synthetic Data Generation with LLMs: A Promising Data-Driven Solution review: ## Summary This paper addresses the challenge of unbalanced data across different domains in multi-domain recommendation systems by generating synthetic data using large language models. With carefully designed data filtering and denoising strategies, this approach is believed to have the potential to significantly enhance multi-domain recommendation performance. ## Strengths - **Relevance of Research Problem**: In introduction, the authors show a strong understanding of the recommendation domain, supported by a thorough literature review. They have clearly identified a precise and significant problem within the field. - **Clarity of Task Definition**: In methods and problem formulation, the authors have clearly illustrated the task, providing a detailed pipeline to solve the problem, effectively formulating the research objective. ## Weaknesses - **Ambiguity when mentioning CTR Prediction**: In the first sequence of problem formulation, It is unclear if the formulation that follows the initial problem statement pertains directly to CTR prediction. Clarification would enhance the coherence of this section. - **Lack of Clarity in Key Methodological Innovations**: Although the Related Work section extensively covers prior research on LLM-enhanced recommendation, the Problem Formulation and Method sections lack clarity on how LLMs **in their work** are applied to enhance this specific problem. The authors introduce the problem, datasets, model, and metrics of the recommendation, but omit a clear explanation of how the LLM generates data and how this process enhances the recommendation system. - **Potential for Model Pattern Disruption from Generated Data**: Recent researches on utilizing generated data to train large language figure out that the use of synthetic data from LLMs may risk altering the model’s inherent patterns. It would be better if they add additional discussion or experimentation to demonstrate mitigation of this risk, which can also strengthen the contribution. ## Score - **Soundness**: 8/10 - **Contribution**: 7/10 - **Presentation**: 6/10 rating: 7 confidence: 3
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
pU4uUJLk0m
official_review
1,731,344,415,595
FMe1EenIGh
[ "everyone" ]
[ "~Jin_Zhu_Xu1" ]
title: Clear Motivation and Well Define Problem review: The proposal points out a clear problem and motivation for the idea however, the methodology is not convincing enough. It would be better if the proposal thoroughly included more concrete details of techniques implementation and the roles of LLM. Although the author tried to explain briefly how synthetic data can improve the recommendation quality of the current models, the explanation of technical terms is somewhat unclear and there is a notable lack of detail in the implementation approach. rating: 8 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
oW5fNZIHkP
official_review
1,731,142,490,028
FMe1EenIGh
[ "everyone" ]
[ "~Guangjie_Xu1" ]
title: Review review: **Summary** Overall, this proposal is robust, addresses a critical research gap, and suggests a promising approach to an emerging issue in recommendation systems. **Research Problem** The problem is both innovative and relevant, exploring LLMs’ applications beyond traditional NLP, positioning the research at the frontier of recommendation system technology. This approach taps into a high-interest area, capitalizing on recent trends in leveraging LLMs to address data limitations. **Reliability of the Idea** The proposal’s concept appears reliable. Recent studies show LLMs can simulate user behavior effectively with minimal historical data, making the application of LLM-generated data for cold-start scenarios a well-founded choice. **Plan** The proposal outlines specific models and evaluation metrics, providing a clear roadmap for implementation. However, detailing how the data filtering strategy will handle differences across domains would further strengthen the plan. **Writing** The language is generally precise and professionally written, though some technical terms could benefit from further clarification for readability. Most sections are well-explained and demonstrate a clear, logical flow. rating: 8 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
jSvgSZTjiP
official_review
1,731,134,755,931
FMe1EenIGh
[ "everyone" ]
[ "~Kangping_Xu1" ]
title: Review of "Enhancing Multi-Domain Recommendations via LLM-Generated Data" review: ## Pros 1. **Novel Approach to Data Imbalance** - Takes a fresh perspective by addressing multi-domain recommendation challenges through data enhancement rather than model architecture - Leverages the emerging capabilities of LLMs in a practical application, potentially creating a new direction for recommendation system research 2. **Comprehensive Evaluation Framework** - Proposes a well-rounded evaluation strategy using both accuracy (AUC) and fairness metrics (Gini coefficient, item coverage) - Includes comparison with both single-domain and multi-domain baseline models, providing a thorough validation approach ## Cons 1. **Limited Technical Details** - The method part lacks specific details about the LLM data generation process and the "elaborately designed data filtering and denoising strategy" mentioned in the introduction, which I think is the most important enhancement and - There is no clear explanation of how the synthetic data will be validated for quality and authenticity. I am unsure if just evaluating the end-to-end recommendation metrics would guarantee the quality of the synthetic data. The proposal presents an interesting direction for research but would benefit from more detailed technical specifications and consideration of practical implementation challenges. rating: 8 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
XrhLdvZ2uZ
official_review
1,730,970,695,761
FMe1EenIGh
[ "everyone" ]
[ "~Huajun_Bai1" ]
title: Advancing Multi-Domain Recommendations with LLM-Synthesized Data: A Review of Innovation and Its Challenges review: Strengths 1. Novelty: The paper presents an innovative approach to multi-domain recommendation systems by leveraging LLMs to generate synthetic data, addressing the cold-start problem in a unique way. 2. Comprehensive Literature Review: The authors have provided an extensive review of related works, mentioning numerous papers that cover multi-domain recommendation and data synthesis, demonstrating a solid understanding of the field. 3. Practical Relevance: The paper tackles the real-world issue of data sparsity in multi-domain RS, which is particularly pertinent given the increasing number of platforms offering a variety of services. Weaknesses 1. Data Filtering and Denoising: The proposal lacks a detailed explanation of the methodology for filtering and denoising the synthetic data generated by LLMs, which is essential for ensuring the quality and reliability of the recommendation system. 2. Multi-Domain Evaluation Metrics: Although the paper employs established metrics such as AUC, Gini coefficient, and item coverage, there is a need for a more explicit discussion on how these metrics will specifically assess the impact of synthetic data on the fairness and accuracy of recommendations across multiple domains. rating: 5 confidence: 3
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
WQWqBBQmdp
official_review
1,731,262,504,445
FMe1EenIGh
[ "everyone" ]
[ "~Tong_Yu9" ]
title: Review of "Enhancing Multi-Domain Recommendations via LLM-Generated Data" review: Quality: The quality of the paper is generally high, the methodology is clear, and the effective solution is proposed for the sparse data problem in the multi-domain recommendation system. However, the details of the data generation and implementation process are not well described. Originality: Although the idea of using LLM to generate data was novel, its distinction and advantage from existing methods needed to be further emphasized in the application of multi-field recommendation system. Significance: This research is of great significance in multi-domain recommendation systems, especially in cold start scenarios. However, the representativeness of the generated data may affect its practical application. Pros: Innovative approach: The idea of using LLM to generate virtual user and project data is forward-looking. Solve the problem of data sparsity: This method effectively alleviates the problem of data sparsity in multi-domain recommendation. Cons: Lack of originality: Although the method is novel, it lacks obvious innovation points compared to existing research. Complexity: The complexity of data generation and denoising can affect the feasibility of practical applications. Data representation problem: The generated data may not fully represent the real user behavior, affecting the recommendation effect. rating: 8 confidence: 3
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
Ne53N3IO7g
official_review
1,731,324,487,636
FMe1EenIGh
[ "everyone" ]
[ "~Changsong_Lei2" ]
title: Review of "Enhancing Multi-Domain Recommendations via LLM-Generated Data" review: ### Summary: This proposal tries to address the challenge of data sparsity and imbalance in multi-domain recommendation systems by leveraging synthetic data generated by large language models (LLMs). ### Pros: - The proposal demonstrates a strong understanding of the technical requirements for multi-domain recommendations. It outlines methods for data synthesis, noise control, and the incorporation of LLM-generated data within the system. - The CTR prediction task is clearly defined, and the paper provides necessary mathematical notations to describe the problem and evaluation metrics, such as AUC, Gini coefficient, and item coverage. ### Cons: - Given the computational intensity of LLMs, it would be useful to discuss the scalability of the approach.. - Lack clear description for their proposed method and motivation. Generally speaking, this proposal is well written and demonstrates their understanding of the problem. rating: 8 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
BwDoeEdXe4
official_review
1,731,420,775,746
FMe1EenIGh
[ "everyone" ]
[ "~Yuji_Wang4" ]
title: Review of "Enhancing Multi-Domain Recommendations via LLM-Generated Data" review: The project aims to explore the research topic of multi-domain recommendations. The authors propose to address the problem raised by imbalanced data size between different domains by enhancing the training with LLM-generated data. ### Strengths: 1. Topic selection: Multi-domain recommendation is a research topic with high practical value. The proposal highlights the challenges in this area and offers targeted solutions that address these difficulties. 2. Well-structured writing: The proposal is well-organized. It provides a comprehensive review of related works and the research problem is well-defined. Meanwhile, the experimental plan is detailed and feasible, providing a solid foundation for the study. ### Weaknesses: 1. Using LLM-generated data to enhance the training process is a widely-used approach. It could be better if innovative methods can be designed when implement this approach. 2. The method section lacks specific details about the LLM used for data generation and the algorithms for data augmentation and preprocessing. rating: 8 confidence: 3
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
9kNVjcAOSx
official_review
1,731,165,150,258
FMe1EenIGh
[ "everyone" ]
[ "~Liutao7" ]
title: The proposal has a good concept, exploring the use of large models to solve some practical problems of recommendation systems from a data perspective. review: The proposal has good application potential and presents an interesting idea of using large language models to generate data to enhance the performance of multi-domain recommendation systems. I think: 1) The proposal has good integrity. 2) The proposal explores the potential of LLM in generating virtual user and item data, which offers a novel approach to addressing the issue of data sparsity. 3) The workload is reasonable. One suggestion: It would be important to further refine the dataset construction method in the proposal; also, consider whether the issue of hallucinations in large models itself needs to be addressed, and how to verify the quality and authenticity of the generated data. rating: 9 confidence: 4
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
9L6e8jO1ee
official_review
1,730,894,479,157
FMe1EenIGh
[ "everyone" ]
[ "~Ziang_Zheng1" ]
title: Leveraging LLMs for Data Synthesis in Multi-Domain Recommendation: A Promising Data-Centric Approach with Considerations for Quality and Fairness review: **Summary** This paper addresses the data sparsity issue in multi-domain recommendation systems (RS) by proposing an approach that leverages large language models (LLMs) to generate synthetic user and item data in cold-start scenarios. By focusing on data synthesis rather than structural model adjustments, the authors aim to improve recommendation quality and reduce deployment costs across multiple domains. **Strengths** 1. **Novelty**: The use of LLMs for generating virtual user and item data to mitigate cold-start issues in multi-domain RS is a novel approach. The authors bring a unique perspective by focusing on the data itself rather than only refining model architectures. 2. **Timeliness**: Multi-domain recommendation is increasingly important due to the rise of comprehensive platforms offering various services. This work is timely and addresses a real-world problem. 3. **Solid Motivation**: The authors clearly explain the challenges of data imbalance across domains and how LLMs, with their extensive knowledge base, offer a feasible solution for generating high-quality data. 4. **Comprehensive Related Work**: The related work section is thorough and demonstrates a strong grasp of prior research in both multi-domain RS and synthetic data generation. **Weaknesses** 1. **Data Quality and Filtering**: The proposal briefly mentions filtering and denoising synthetic data generated by LLMs. However, it lacks a detailed methodology on how the authors plan to address potential noise and bias in the generated data, which is crucial for ensuring RS performance. 2. **Potential LLM Biases**: LLMs may introduce unintended biases or inaccuracies in generated data, especially for nuanced user preferences in multi-domain scenarios. The proposal would benefit from a discussion on mitigating such biases. 3. **Evaluation Plan**: While the proposal includes AUC, Gini coefficient, and item coverage metrics, it lacks clarity on how these metrics will capture the impact of synthetic data on recommendation fairness across domains. **Overall Evaluation** The proposal presents a promising approach to improving multi-domain recommendation systems using LLMs for data synthesis. With additional clarity on data filtering and bias mitigation, the proposed methodology has strong potential to advance multi-domain RS research. **Score** - **Originality**: 8/10 - **Clarity**: 7/10 - **Technical Soundness**: 6/10 - **Relevance**: 9/10 **Recommendation**: Accept with minor revisions. rating: 8 confidence: 3
FMe1EenIGh
[Proposal] Enhancing Multi-Domain Recommendations via LLM-Generated Data
[ "Chumeng Jiang", "Kairong Luo", "Zhixuan Pan" ]
Due to the increasingly severe information overload issue, the application of recommendation systems (RS) prevails across all kinds of Internet platforms, as they can provide personalized items for each user. Recently, with the growing number of specialized domains in one comprehensive platform, such as short video recommendations, article recommendations, and product recommendations in the same app, multi-domain recommendation has garnered significant attention. The multi-domain recommendation can simultaneously leverage knowledge from different domains, alleviating the data sparsity issue and allowing a single model to make recommendations across multiple domains, reducing the deploying costs. Nevertheless, the historical data size between different domains varies. Some areas may have significantly more data than others, namely the rich or cold-start scenarios. This disparity in data size can lead to certain limitations during model training. For instance, the learning of domain-specific parameters in cold-start scenarios may be insufficient, while the learning of domain-shared parameters may be dominated by rich scenarios. Previous work mainly addresses these issues through meticulous structural design of the models. In this paper, we adopt a different perspective by addressing this issue from the data standpoint. The emergence of LLMs has made it possible to generate virtual user and item data. Furthermore, LLMs, with their extensive world knowledge and outstanding comprehending capability, have demonstrated impressive recommendation capabilities in cold-start scenarios. In this case, we utilize the LLMs to simulate users in cold-start scenarios and synthesize more sufficient positive samples after learning from existing multi-domain historical interactions. Through an elaborately designed data filtering and denoising strategy, the recommendation quality of the multi-domain models can be enhanced.Moreover, through the lens of recommendation systems, we may get more insight into the synthetic data from existing LLMs.
[ "Multi-Domain Recommendation", "Large Language Model" ]
https://openreview.net/pdf?id=FMe1EenIGh
6VYSqDSErv
official_review
1,731,425,542,137
FMe1EenIGh
[ "everyone" ]
[ "~Chendong_Xiang1" ]
title: review review: Strengths: 1. Innovative Use of LLMs for Data Generation: The paper takes a fresh approach by using LLMs to generate synthetic data, tackling the data sparsity issue in multi-domain recommendations. 2. Effective Cold-Start Handling: It addresses the cold-start problem by creating simulated user interactions, improving recommendation quality in low-data domains. 3. Comprehensive Evaluation: The study uses multiple metrics (AUC, Gini coefficient, item coverage) to assess accuracy and fairness, providing a well-rounded evaluation. 4. Cost Reduction Potential: A single, cross-domain model may reduce deployment and maintenance costs, benefiting platforms with diverse domains. Weaknesses: 1. Limited Domain-Specific Analysis: The impact of synthetic data on different domains isn’t thoroughly explored; some domains may benefit more than others. 2. Noise from Synthetic Data: The risk of noisy data impacting performance remains, and the denoising process could be explained more clearly. rating: 7 confidence: 2
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
vJeF7bdJmZ
official_review
1,731,149,704,552
FLHYoQ9PJ6
[ "everyone" ]
[ "~Guangjie_Xu1" ]
title: Review review: The proposal plan to address a relevant and impactful issue—enhancing the efficiency of overseas market research for Chinese companies. **Pros** 1. The planned steps—data ingestion, cleaning, analysis, and output generation—are logical and align well with the project’s goals. **Cons** 1. While aiming for timely information, the proposal does not thoroughly explain how it will manage real-time data challenges. 2. The proposal could benefit from providing more specific examples or use cases to illustrate the agent’s applications in overseas marketing tasks. 3. No references rating: 5 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
ovu5s9dBPa
official_review
1,731,330,335,783
FLHYoQ9PJ6
[ "everyone" ]
[ "~Yang_Ouyang2" ]
title: Good structure and understanding, but lack details review: Strengths: Relevant Solution: Addresses traditional marketing research limitations such as costs, outdated information, usability Well-Defined: Authors have a clear understanding in the challenges of traditional overseas marketing methods. Structured Methodology: Authors provide a clear workflow, including data ingestion, analysis, and feedback. Weaknesses: Lacks Specificity in LLM Application: Lacks detail on how LLMs can diverse datasets. Unclear LangChain usage: How does it help? Data Privacy: Since the Agent collects data, is the data collected allowed? rating: 7 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
oNmXMELmCz
official_review
1,731,139,327,832
FLHYoQ9PJ6
[ "everyone" ]
[ "~Tim_Bakkenes1" ]
title: Review review: The problem highlighted by the proposal is very clear and the benefit of utilising agents for oversea marketing support could have a big impact on it. Breaking down the problems with current approach into 3 main parts, as you do, strengthens the potential of your research. There are some issues with the proposal. 1. The grammar: "we plane use LangChain as AI agent Frameworks" - This sentence should be: "We plan to use LangChain as an AI agent Framework". There are more instances where the grammar is awkward or incorrect. Make sure to proofread before you submit. 2. References: It would improve the quality of the proposal if it was built on more trusted sources. 3. Methodology: The method proposed in the proposal fails to give examples on what data sources can be used to get data from websites and what LLM:s that could be utilised. While the method is broad, it would benefit from a bit more depth. Overall the proposal presents an interesting research topic but could be refined to make it more clear for the reader how these agents could be developed. rating: 5 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
Vy8D31k8Vj
official_review
1,731,331,608,313
FLHYoQ9PJ6
[ "everyone" ]
[ "~XueZeng1" ]
title: Review review: This proposal shows the clear steps to solve the problem. However,there is no in-depth thinking about the technical challenges encountered in solving this problem, such as the ability of LLM to deal with problems in a specific field,and no references. rating: 4 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
JQE4hAsZ6e
official_review
1,731,120,418,307
FLHYoQ9PJ6
[ "everyone" ]
[ "~Renrui_Tian1" ]
title: Clear Problem Identification, but Methodology and Writing Require Refinement review: **Strengths**: * **Identifies a clear problem**: The proposal effectively highlights the limitations of traditional overseas marketing methods, particularly the reliance on outdated reports and manual data analysis. **Weaknesses**: * **Vague and underdeveloped methodology**: The proposed workflow lacks specific details on data collection methods, LLM model selection and training, data cleaning techniques, and analysis methodologies. * **Lack of specificity in data sources**: The proposal mentions data collection from online databases, social media, and official reports but does not specify which sources will be used. This lack of detail raises concerns about the comprehensiveness and reliability of the data. * **Lack of clarity on LLM utilization**: The proposal mentions the use of open-source LLM models but lacks details on specific models, potential fine-tuning requirements, and how the models will be integrated into the agent's workflow. * **Lack of bibliography and references**: The proposal does not include a bibliography or references section, making it difficult to verify the sources of information and research mentioned. * **Grammar and style issues**: The proposal contains some obvious grammatical errors and inconsistencies, impacting its rigor and professionalism. **Overall**: The proposal presents a general idea for utilizing AI and LLMs in overseas marketing support. However, it lacks sufficient detail, clarity, and consideration of critical aspects such as data sources and LLM utilization. Addressing these weaknesses is crucial for developing a viable and impactful solution. rating: 4 confidence: 4
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
Gmncxb7Qyu
official_review
1,731,311,440,969
FLHYoQ9PJ6
[ "everyone" ]
[ "~Wenjing_Wu1" ]
title: Review review: **Summary**: The proposal outlines a method based on Large Language Models (LLMs) designed to address challenges in traditional overseas market research. **Strengths**: - **Clear Problem Identification**: The proposal effectively describes the limitations of traditional overseas market research approaches, providing a clear context and rationale for the new method. - **Logical Structure in Methodology**: The methodology section is organized in a step-by-step format, which makes it easy to follow and logically sound. **Weaknesses**: - **Formatting Issues**: The proposal’s format diverges from the conventional structure. For instance, there is only a "1.1" subsection without a subsequent "1.2", which appears unnecessary and disrupts the document's flow. - **Issue in Organizational Structure**: The abstract seems to function as the introduction, which could lead to confusion. - **Lack of References**: The proposal would benefit from a set of relevant references to strengthen its credibility and demonstrate familiarity with existing literature. - **Insufficient Detail in Methodology**: While the methodology is logically organized, it lacks concrete details for each step. Adding specific information would enhance the clarity and depth of the approach. rating: 5 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
DPJt38hlcb
official_review
1,730,882,851,372
FLHYoQ9PJ6
[ "everyone" ]
[ "~Feihong_Zhang1" ]
title: review review: No references are listed, many of them seem to be generated by GPT, the idea is not very clear, and the scheme is not fully understood. rating: 4 confidence: 2
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
AfTKdcBXcp
official_review
1,731,413,714,908
FLHYoQ9PJ6
[ "everyone" ]
[ "~Maanping_Shao1" ]
title: Review review: This proposal outlines a promising approach to streamline overseas marketing through an AI-driven agent utilizing Large Language Models (LLMs). The project aims to address the high costs, outdated information, and limited usability associated with traditional market analysis methods. By incorporating LLMs and tools like web scraping and social media APIs, the proposed agent will enable real-time data ingestion, analysis, and output generation, enhancing decision-making efficiency for international market insights. The methodology is well-defined, focusing on automated data collection, preprocessing, analysis, and reporting, with a structured feedback loop to improve performance. This approach could significantly reduce costs and provide up-to-date, actionable insights, making it a strong candidate for further development. However, clarity on specific challenges in LLM-based analysis, data privacy, and accuracy metrics would strengthen the proposal. rating: 6 confidence: 3
FLHYoQ9PJ6
【Proposal】 Agent for Oversea Marketing Support
[ "Zhijie shen", "Wuqian", "Keyu Shen" ]
Traditional methods for overseas marketing require extensive manual efforts, such as reading documents, conducting interviews, following news, and analyzing statistical reports about target countries. In this project, we propose an agent-based system utilizing Large Language Models (LLMs) to enhance the efficiency and intelligence of country studies.
[ "Agent", "LLM", "Crawler" ]
https://openreview.net/pdf?id=FLHYoQ9PJ6
6VGpxdBDgR
official_review
1,731,061,263,240
FLHYoQ9PJ6
[ "everyone" ]
[ "~Yunghwei_Lai1" ]
title: review review: No references and insufficient elaboration on methodology. The whole proposal is too vague. rating: 4 confidence: 3
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
vHgEr5Gvbq
official_review
1,731,383,664,793
DZI0v9KRU0
[ "everyone" ]
[ "~Cheng_Gao2" ]
title: Review for Relating Physical Activity to Problematic Internet Use review: Strengths: - The issue highlighted is a highly relevant topic is practical and innovative. - The methodological approach is well thought out, with a clear plan to use deep learning models and a hybrid neural network to integrate time series and tabular data. - The use of semi-supervised learning (pseudo-labelling) to handle missing labels and techniques like GRU for time series is promising and aligns with current best practices. - The proposal is thorough in its consideration of data limitations and shows an understanding of the challenges in data imputation and model design. Weaknesses: - The proposal could benefit from a **clearer task definition**. For instance, specifying the exact format and type of model input would enhance the reader's understanding of this task. - There could be more discussion on potential **evaluation strategies** and error analysis methods to ensure model robustness, particularly for edge cases in the data. rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
p1yTUUhMac
official_review
1,731,331,982,817
DZI0v9KRU0
[ "everyone" ]
[ "~Yang_Ouyang2" ]
title: Well-researched approach, but would benefit from clearer model justification and more discussion of evaluation methods. review: Strengths Relevant Problem Focus: Children and health is a big concern of current age. Thorough Literature Review Clear Objective and Methodology: The authors have identified the target variable, Severity Impairment Index, and the primary goal of maximizing the quadratic weighted kappa metric. Weaknesses Lacks Justification for for choosing specific models. Limited Discussion on evaluation. Will there be cross evaluation? The proposal presents a well-researched approach, but would benefit from clearer model justification and more discussion of evaluation methods. rating: 10 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
oshTwwCmHA
official_review
1,730,966,137,525
DZI0v9KRU0
[ "everyone" ]
[ "~Lily_Sheng1" ]
title: Review of Submission 51 review: The submission addresses the issue of Problematic Internet Use (PIU) and using physical indicators to predict PIU in children and adolescents. The approach employs a semi-supervised learning approach with a Hybrid Neural Network, combining LSTM or GRU architectures, to handle missing data and integrate accelerometer time series with health and demographic data for PIU prediction. Pros: 1. The use of imputation and semi-supervised learning addresses missing data challenges. 2. The topic aligns well with current research trends, as it combines machine learning techniques and physical activity data to address behavioral health issues like PIU. Cons: 1. There could be more details on what the evaluation metric would be for different choices of architectures. rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
nu4rQdBCDD
official_review
1,731,423,560,561
DZI0v9KRU0
[ "everyone" ]
[ "~liyingxin1" ]
title: Should add more details about the method of handling time series data review: Very meaningful topic. Persuasive method and detailed related works. I think time series data has its difficulty to prepare. So it is suggested to elaborate on the methods of data handling, especially in dealing with missing data and time series data, to ensure the reproducibility and reliability of the methods. rating: 9 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
WGhbYJepbP
official_review
1,731,417,488,906
DZI0v9KRU0
[ "everyone" ]
[ "~Han-Xi_Zhu1" ]
title: Review for Relating Physical Activity to Problematic Internet Use review: The proposal focuses on a concerned social problem, problematic internet usage (PIU). The authors try to design and train a deep neural network to predict the risk of PIU. ## Strengths 1. The author propose a hybrid neural network to integrate time series and tabular data, which is solid and practical. 2. The problem the author try to address is a globally concerned issue. It has significant implications for public health and social outcomes, particularly among young populations. ## Weaknesses 1. While QWK is mentioned as the primary metric, a more comprehensive evaluation strategy, including additional metrics and validation techniques, would strengthen the project. rating: 9 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
PzcwJFPZ0M
official_review
1,730,879,949,224
DZI0v9KRU0
[ "everyone" ]
[ "~Guilherme_Félix_Diogo1" ]
title: Clear explanation of the problem but more could be said on the models review: The proposal discusses a relevant and socially significant issue, which is predicting problematic internet usage (PIU) of underage individuals driven by physical activity data. The project is relevant and supported by a great deal of existing literature, especially with regards to the use of physical activity data as a proxy for behavioral health measures. The employment of deep learning approaches for this objective seems to be justified owing to the characteristics of the data. A little bit more could be said on how better performing neural network systems built deep learning will deep learn tree models into extinction. rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
LjcLhuzthe
official_review
1,731,144,238,050
DZI0v9KRU0
[ "everyone" ]
[ "~André_Moreira_Leal_Leonor1" ]
title: Proposal 51 review review: The important, very timely issue that this proposal tries to address is the prediction of problematic Internet use among minors, using physical activity data as a proxy for psychological risk factors. In so doing, this approach considers not only invasive assessments but also the data that are widespread, hence underpinning the potential societal impact of the project. Deep learning methods apply best to such complex data, with missing values inherent in accelerometer readings; this justifies the methods chosen in reaching the goals of the project. The application of hybrid neural networks with semi-supervised learning methods is a well-thought-out approach in improving prediction accuracy by using techniques such as pseudo-labeling. While it is evident why deep learning is applicable, the proposal could go further in explaining how the chosen model architecture might outperform traditional tree-based ensemble methods, which may highlight special advantages of deep learning in dealing with time series and missing data in behavioral prediction. rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
JDJHIvuwu3
official_review
1,731,141,113,622
DZI0v9KRU0
[ "everyone" ]
[ "~Jinsong_Xiao1" ]
title: review for proposal 51 review: This proposal investigates the link between physical activity data and the risk of problematic internet use (PIU) among minors, with a focus on designing a deep neural network for PIU prediction. Pros: - Clear Objectives: The authors outline the problem effectively, with a well-defined target (Severity Impairment Index) and evaluation metric (Quadratic Weighted Kappa). - The use of a hybrid neural network to integrate time-series and tabular data is logical and supported by recent literature. Suggestions: A more detailed introduction of the evaluation scheme and the application of the new model rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
F514P7zueK
official_review
1,731,390,180,674
DZI0v9KRU0
[ "everyone" ]
[ "~Zhuofan_Sun1" ]
title: Review review: This paper proposes a deep learning approach to predict Problematic Internet Use (PIU) in minors based on their physical activity data. The authors aim to participate in a Kaggle competition on this topic and focus on developing a model that maximizes the quadratic weighted kappa (QWK) metric. The paper presents a clear background on PIU, defines the problem, and outlines related work. Strengths: Relevance and Importance: The study addresses a critical issue affecting a growing number of adolescents, with potential negative consequences for their mental and physical health. Clear Objective: The paper clearly defines the goal of predicting PIU severity using physical activity data, measured through accelerometers. Methodological Approach: The authors propose a comprehensive approach involving semi-supervised learning for handling missing labels and a deep neural network incorporating both time series and tabular data. Technical Considerations: The paper acknowledges and addresses key challenges, such as missing data in accelerometer data and the need to differentiate between non-wear and sedentary activity. References: The paper provides a solid foundation with relevant literature on PIU, time series classification, and deep learning techniques. Overall, this paper presents a promising approach to predicting PIU using physical activity data. Addressing the aforementioned weaknesses would strengthen the paper and provide a more comprehensive evaluation of the proposed model. rating: 10 confidence: 5
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
6HaN0qcGFD
official_review
1,731,405,844,938
DZI0v9KRU0
[ "everyone" ]
[ "~Isak_Tønnesen1" ]
title: Review: [Proposal-ML] Relating Physical Activity to Problematic Internet Use review: This proposal presents a highly relevant and well-structured approach to predicting problematic internet use (PIU) in minors using physical activity data. The methodology is particularly strong, combining state-of-the-art deep learning techniques (hybrid neural networks with LSTM/GRU) with innovative solutions for handling missing data through semi-supervised learning. The authors demonstrate good understanding of both the technical challenges and societal importance, supported by a promising literature review. The clear mathematical formulation of the problem, detailed evaluation metrics (QWK), and thoughtful consideration of data limitations make this a interesting proposal. The choice to integrate both time-series actigraphy data and tabular demographic information shows good understanding of the problem space. While minor clarification on model architecture specifics would be beneficial, the overall approach is thorough and promising for advancing both PIU detection and general behavioral health monitoring. rating: 10 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
5vkHw6PVpG
official_review
1,731,419,638,999
DZI0v9KRU0
[ "everyone" ]
[ "~Wuqian1" ]
title: Review of "Relating Physical Activity to Problematic Internet Use" review: The proposal "Relating Physical Activity to Problematic Internet Use" is of high quality. It addresses a significant public health concern, the risk of problematic internet usage (PIU) among minors, through the lens of physical activity data. The project's approach to building a predictive model using deep neural networks is technically sound and aligns with current trends in data science and machine learning. Pros 1.Innovative Approach: Utilizes a deep neural network to predict PIU, offering a potentially more accurate and nuanced prediction model. 2.Technical Rigor: The proposed method includes sophisticated techniques like LSTM and GRU architectures for time series data. Cons 1.Complexity of Data: The project involves complex data, including missing data and the need for imputation, which could introduce errors or biases. 2.Dependence on Kaggle Data: The project relies heavily on the Kaggle dataset provided by the Child Mind Institute, which may limit the generalizability of the findings. rating: 8 confidence: 4
DZI0v9KRU0
[Proposal-ML] Relating Physical Activity to Problematic Internet Use
[ "Killian Conyngham", "Jackson M Luckey", "Fabian Pawelczyk" ]
The focus of our project is to predict the risk of problematic internet usage (PIU) by minors, based on data of their physical activities. In particular, our goal is to build a model for entrance in the Kaggle competition run by the Child Mind Institute on this topic. Here, one key challenge lies in the high level of missing data—particularly concerning the target variable—and the actigraphy time-series component of the dataset. Our primary objective will be to design and train a deep neural network to effectively address these issues.
[ "Deep Learning", "Time Series", "Actigraphy", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=DZI0v9KRU0
4zuD5RX99O
official_review
1,731,257,244,174
DZI0v9KRU0
[ "everyone" ]
[ "~Zhijie_shen3" ]
title: 51 Review review: ### Peer Review **Summary** The project has strong potential with a well-thought-out approach. **Pros**: 1. The choice of LSTM and GRU for time-series analysis is appropriate given their proven effectiveness in handling sequential data. The proposal shows a good understanding of leveraging deep learning for complex patterns. 2. The use of semi-supervised learning with pseudo-labeling to handle missing data is innovative and can enhance the model's robustness. This approach shows creativity in addressing data challenges. 3. Combining accelerometer time-series data with tabular demographic data is a strong approach. This comprehensive integration could improve the accuracy of predicting problematic internet use. **Suggestiong**: 1. The proposal lacks specific details on how the deep learning models will be optimized, such as hyperparameter tuning or strategies to prevent overfitting. Consider using techniques like cross-validation and dropout. 2. While pseudo-labeling is a creative solution, it may not be sufficient for large data gaps. Exploring additional imputation methods, such as RNN-based imputation or interpolation, could improve data quality. 3. While using QWK is a good choice, the proposal would benefit from including additional metrics like F1 score, precision, and recall to provide a more comprehensive evaluation of model performance. rating: 8 confidence: 3
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
xHHz0icwzg
official_review
1,731,313,101,359
CE85qdNSlp
[ "everyone" ]
[ "~Shuangyue_Geng1" ]
title: Well-structured though definition could be clearer review: This proposal is well-structured, presenting a novel method for categorizing misconceptions in math. The approach shows strong potential, and the organization is logical, with clear sections on related works and methodology. However, the second section's definitions are not easily accessible to those without domain knowledge, making it difficult to see their connection to the proposed method. Providing more detailed and clear explanations of the definition would enhance the proposal’s accessibility and impact. rating: 9 confidence: 3
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
rkaboW9x2k
official_review
1,731,137,228,922
CE85qdNSlp
[ "everyone" ]
[ "~Kangping_Xu1" ]
title: Review of "Mining Misconception in Mathematics" review: ## Pros: - Practical value - Automating misconception identification could significantly reduce the time and effort needed to create effective multiple-choice math assessments, which currently require manual analysis of distractors. - Comprehensive methodology - The proposal combines multiple advanced techniques (LLMs, in-context learning, fine-tuning, and multi-vector embeddings) to address the complexity of mathematical misconception analysis. ## Cons: - Dataset limitations - The Eedi dataset only contains 1,868 problems, which may be insufficient for training robust models, especially given the large number of possible misconceptions (2,586). While they propose generating additional data, synthetic data might not accurately represent real student misconceptions. rating: 9 confidence: 2
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
m0XPV2cbyW
official_review
1,731,379,150,785
CE85qdNSlp
[ "everyone" ]
[ "~Changsong_Lei2" ]
title: review of "Mining Misconception in Mathematics" review: ### Summary: the proposal presents a model that leverages large language models (LLMs) to identify misconceptions in math multiple-choice questions (MCQs). By examining incorrect answers (distractors), the model aims to automatically predict the associated misconception. ### Pros: - Proposes clear methodologies like multi-vector retrieval and in-context learning, showing a well-structured approach to addressing specific challenges in misconception mining. - Gives a clear defination to the problem and the related dataset. ### Cons: - Lacking a detailed discussion on the feasibility of this method. For instance, with fewer than 2000 MWPs in the dataset, can the model be effectively fine-tuned? rating: 9 confidence: 4
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
fAg36RbdsI
official_review
1,731,336,792,611
CE85qdNSlp
[ "everyone" ]
[ "~XueZeng1" ]
title: Review review: It proposes to retrieve correlated misconceptions through vector embedding search of misconceptions, thereby narrowing down the list fed to the LLM.This is a clever way that enables the LLM to handle the misconception information related to the given dataset more effectively, avoiding processing difficulties or inaccuracies caused by the limitation of the context window.It combines different learning and fine-tuning methods.Focus is placed on expanding the in-context learning and fine-tuning methods for LLM's mathematical reasoning to understand mathematical misconceptions, and aiming to evaluate the difference in reasoning ability between these two methods. Howerver,although it is mentioned in the text to evaluate the difference in reasoning ability of the LLM under different methods, the performance of BGE-M3, etc., overall, there is a lack of a specific and quantifiable evaluation indicator system rating: 8 confidence: 4
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
X9B6j645We
official_review
1,731,418,437,094
CE85qdNSlp
[ "everyone" ]
[ "~Chumeng_Jiang1" ]
title: Needs More Clarification review: This proposal aims to develop a model that can automatically identify misconceptions in mathematical multiple-choice questions. The authors propose in-context learning and fine-tuning techniques to enhance the LLM’s reasoning ability to associate incorrect answers with specific misconceptions. They also plan to leverage LLMs to generate additional data for underrepresented misconceptions. **Strengths:** - **Practical application value of the research topic:** Proper classification of misconceptions in multiple-choice questions is highly desired in educational institutions. - **Detailed pipeline design:** The proposal involves multiple stages, with considerations for data generation, retrieval, fine-tuning, and evaluation. **Weaknesses:** - **Requires further clarification:** Why use multi-vector retrieval, and at what stage of the LLM pipeline will it be applied? - **How to ensure the accuracy of data generated by other LLMs:** The proposal mentions that part of the dataset will be generated by other LLMs. This raises the question of whether training on this data aligns the model with the generating LLM’s patterns rather than with correct misconceptions. rating: 7 confidence: 3
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
V0JYiPX8Bo
official_review
1,731,423,125,414
CE85qdNSlp
[ "everyone" ]
[ "~Yangchi_Gao1" ]
title: review review: The proposal presents a promising approach to leveraging LLMs for educational assessment. It has the potential to significantly improve how misconceptions are identified and addressed in mathematics education. Advantages: 1.The proposal to utilize large language models (LLMs) for mining misconceptions in mathematics is a novel approach that could significantly enhance educational assessment tools. 2.By focusing on identifying misconceptions associated with incorrect answers, the project has the potential to improve understanding of common student errors and knowledge gaps. Disadvantage: 1.Misconceptions in mathematics can be highly nuanced and context-dependent, which might be challenging for an LLM to accurately capture. rating: 9 confidence: 4
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
Tp8HPO25uI
official_review
1,731,381,425,434
CE85qdNSlp
[ "everyone" ]
[ "~Yifan_Luo2" ]
title: A chanllenging topic review: **Summary:** The proposal, titled "Mining Misconceptions in Mathematics," outlines a project aimed at using large language models (LLMs) to predict math misconceptions associated with incorrect multiple-choice answers. Traditional methods of identifying these misconceptions are time-consuming, so the authors propose an optimized LLM approach. They plan to use vector embeddings to match incorrect answers with misconception categories efficiently. **Pros:** 1. **Efficiency:** Automating misconception identification can save time and resources. 2. **Innovative Use of LLMs:** Utilizing LLMs for this purpose can enhance their application in educational assessments. 3. **Data Utilization:** The approach makes good use of existing datasets and attempts to improve them by generating additional data. 4. **Advanced Techniques:** Incorporating vector embeddings and multi-vector retrieval can improve prediction accuracy. **Cons:** 1. **Complexity:** Implementing such advanced techniques may require significant computational resources and expertise. 2. **LLM Limitations:** LLMs may struggle with nuances of misconceptions, as they are optimized for correct answers. 3. **Dataset Dependence:** The proposal relies heavily on the quality and comprehensiveness of the Eedi dataset. 4. **Scalability Concerns:** The approach may face challenges when scaling to diverse educational contexts or languages. rating: 8 confidence: 4
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
Ol1x5WdjEr
official_review
1,731,321,388,957
CE85qdNSlp
[ "everyone" ]
[ "~Yida_Lu1" ]
title: Clear task definition with method needed to be further clarified review: This study utilizes an LLM to understand misconceptions within incorrect answers of math word problem through in-context learning and fine-tuning, aiming to identify the knowledge gaps of students. The task is meaningful in practice and clearly defined, and the proposal is well-structured. However, the method part can be further clarified by illustrating how in-context learning and fine-tuning will be implemented in detail, as well as the innovation points within these two methods. Providing more implementation details will help readers better understand the methods proposed in this study. rating: 8 confidence: 3
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
KpXtHFYwuS
official_review
1,731,328,064,565
CE85qdNSlp
[ "everyone" ]
[ "~Yu_Zhang61" ]
title: Review of "Mining Misconception in Mathematics" review: The "Mining Misconception in Mathematics" proposal outlines a novel approach for improving large language models' (LLMs) capacity to classify and understand mathematical misconceptions. By leveraging a combination of zero-shot inference, in-context learning, and fine-tuning, the authors aim to advance LLMs in diagnosing misconceptions from multiple-choice questions, where distractors are often tied to common errors. The model will employ a multi-vector embedding retrieval system to categorize misconceptions into a remarkably granular set of 2,586 categories, which could significantly improve educational assessments by providing detailed insights into students’ misunderstandings. This is an ambitious and potentially impactful project, though it would benefit from a clearer description of how the fine-tuning and in-context learning components will interact with multi-vector retrieval in practice. Additionally, the proposal could elaborate on how mean average precision will be used to interpret the model's performance meaningfully. Addressing these areas would improve the proposal's clarity, feasibility, and contribution to educational technology. rating: 7 confidence: 4
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
94V7Uhd0Sa
official_review
1,731,064,112,174
CE85qdNSlp
[ "everyone" ]
[ "~Yunghwei_Lai1" ]
title: review review: The plan to use vector databases for retrieval-augmented generation is forward-thinking and highly beneficial, as it allows for efficient contextual storage and retrieval. This is especially valuable for reducing token-based costs in large language models, making the project both technically efficient and economically viable. This project stands out for its innovative approaches to enhancing mathematical reasoning in LLMs, with a well-rounded dataset strategy, a focus on misconception modeling, and advanced retrieval techniques. Some additional clarity on implementation and evaluation would make the project even stronger, but overall, this is a promising and well-conceived proposal. rating: 10 confidence: 5
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
8zSN7cPVjr
official_review
1,730,902,558,331
CE85qdNSlp
[ "everyone" ]
[ "~Daniel_Wang4" ]
title: Mining Misconception in Mathematics Proposal Review review: The proposal did an excellent job in describing the objective: to improve misconception detection in math assessments by leveraging the distractor answers to identify where exactly students are struggling with. This way of tracking misunderstandings is logical and well-founded. The choice by the authors to employ large language models with vector embeddings and utilize the MAP@25 metric shows thoughtful planningd. However, relying only on the Kaggle Eedi dataset, while a good starting point, is too narrow to cover all the variation in misconceptions that exist. It would have been beneficial to extend the dataset with more variant examples and further widen the scope of the present study. It would also strengthen the proposal to specify how misconceptions will be categorized, as similar types of misunderstandings could complicate the retrieval approach. Additional explanation of the choice of multi-vector embedding and anticipated challenges in in-context learning and fine-tuning would enhance the technical clarity. Overall, this is a great proposal. rating: 9 confidence: 3
CE85qdNSlp
[Proposal-ML] Mining Misconception in Mathematics
[ "Bryan Constantine Sadihin", "Hector Rodriguez Rodriguez", "Matteo Jiahao Chen" ]
Multiple-choice questions are widely used to evaluate student knowledge. Well-designed questions use distractors that are associated with common misconceptions. Large language models (LLMs) have performed well in math reasoning benchmarks, but they struggle with understanding misconceptions because they do not reason in the same way humans do. We propose a method to improve the zero-shot inference capability of LLMs based on in-context learning and fine tuning for multi-step mathematical reasoning. Our system will classify the misconceptions into 2,586 categories using multi-vector embedding retrieval. We will evaluate the mean average precision when classifying the misconceptions in a Kaggle dataset of 1,868 math-related multiple-choice questions. This model will establish a foundation for using LLMs to assess incorrect answers in math education.
[ "Multiple-choice questions", "Mathematical misconceptions", "Large language models", "Mathematical reasoning", "Zero-shot inference", "In-context learning", "Fine-tuning", "Multi-vector retrieval", "Misconception classification" ]
https://openreview.net/pdf?id=CE85qdNSlp
0t7DeIoMhD
official_review
1,730,992,042,844
CE85qdNSlp
[ "everyone" ]
[ "~Ziyi_Liu9" ]
title: Well Done review: The introduction and proposed method are well-described, and there is a thorough review of the related work. However, Chapter 2 could be strengthened by including a mathematical formulation or a more detailed problem statement to clarify the research objectives. rating: 9 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
sVqRJhytb4
official_review
1,731,336,679,475
ATXlcxWGY3
[ "everyone" ]
[ "~Jiaxiang_Liu7" ]
title: Review review: This proposal presents a multi-agent framework aimed at enhancing the research idea generation and refinement capabilities of machine learning agents. The proposed system combines idea proposal, feedback integration, and inference-time scaling to tackle the limitations of single-agent models, particularly in exploring diverse methodologies. By using multi-level feedback from LLM-based judgments, code generation, and empirical results, the framework allows agents to refine solutions systematically, akin to human researchers. The methodology is well-delineated and shows promise for improving ML research automation, though further elaboration on handling computational efficiency and ensuring the quality of generated ideas would strengthen its practical feasibility. Overall, this proposal addresses a crucial aspect in advancing LLM-based agents for complex research tasks and holds significant potential for impactful contributions. rating: 8 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
ltoj6ORejg
official_review
1,731,329,982,820
ATXlcxWGY3
[ "everyone" ]
[ "~Jiajun_Xu3" ]
title: Great Proposal review: The proposal presents an intriguing approach to enhancing the capabilities of large language model (LLM)-empowered agents in the domain of code generation. The authors have identified a significant gap in current LLM applications, which tend to focus on code validity rather than exploring diverse methodologies for solving problems. The proposed multi-agent framework aims to fill in the gap. With solid foundations on the introduction to the problem and relevant researches, the proposal outlines a comprehensive framework that incorporates idea proposal, feedback integration, and scaling. And its plan to evaluate the framework on MLE-Bench, which includes 75 Kaggle competitions, provides a rigorous testbed for the proposed methods. rating: 10 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
hFO5NCI3hm
official_review
1,731,299,268,906
ATXlcxWGY3
[ "everyone" ]
[ "~Mingdao_Liu1" ]
title: Review for "Systematic Idea Refinement for Machine Learning Research Agents" review: The proposal aims to enhance the machine learning research abilities of LLM-agents through y incorporating multi-level feedback from LLM judgments and exploring a broader range of solution pathways. The authors plan to evaluate the proposed framework on 75 Kaggle competitions. The proposal is well-written and includes all required parts for a proposal. The research question highlights an important setting applicable to realistic macine-learning scenerios. The proposed research plan is concrete and detailed, including methods, baselines, evaluation metrics and expected results. rating: 10 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
eFM55OkHyA
decision
1,731,919,040,377
ATXlcxWGY3
[ "everyone" ]
[ "tsinghua.edu.cn/THU/2024/Fall/AML/Program_Chairs" ]
decision: Strong Accept (Long Presentation) comment: **Systematic Improvements in Machine Learning Research Agents** **2.3.1 Key Innovations** 1. Introducing multi-agent techniques for automated machine learning 2. Developing workflows inspired by human researchers **2.3.2 Additional Key Information** None **2.3.3 Advantages** 1. Creative and innovative topic selection **2.3.4 Areas for Improvement** 1. Explore the current advancements of multi-agent systems in automated machine learning tasks 2. Clarify how performance is evaluated, as executable feasibility and efficient execution may require distinct evaluation metrics **2.3.5 Recommendations** 1. Define the task boundaries clearly, breaking them down into more specific and actionable components to facilitate focused research in a shorter time frame. title: Paper Decision
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
ce9FRs84de
official_review
1,731,345,568,612
ATXlcxWGY3
[ "everyone" ]
[ "~KAI_JUN_TEH1" ]
title: Clear problem statement and technical approach. review: This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. The proposed multi-agent framework systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. Firstly, the text identifies the reasons for the agents' insufficient performance and suggests avenues for improvement. Then, it introduces the mechanisms of both single-agent and multi-agent systems. Finally, it presents some innovative and feasible research proposals. I wish you success! rating: 10 confidence: 5
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
YzgrE0MWRp
official_review
1,731,312,311,780
ATXlcxWGY3
[ "everyone" ]
[ "~Gausse_Mael_DONGMO_KENFACK1" ]
title: Clear Motivations review: This paper presents a novel approach for improving LLM-empowered agents in machine learning research by enhancing their capacity to generate and refine diverse methodologies. The framework introduces a multi-agent system where agents collaborate to generate, evaluate, and refine research ideas through multi-level feedback, emulating human research processes more closely. strengths : The authors identify the limitations of current LLM-based ML research agents; adressing theses limitations has the potential to enhance their generalization capabilities. The proposal of a multi-agent system with distinct roles could produce more diverse solutions. weakness: Lack of details about the feedback mechanisms and the final agent generating the code. rating: 8 confidence: 3
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
XEBsK4yvue
official_review
1,731,401,618,607
ATXlcxWGY3
[ "everyone" ]
[ "~Wanlan_Ren1" ]
title: Review for "Systematic Idea Refinement for Machine Learning Research Agents" review: This proposal presents a promising approach to enhance machine learning research agents by developing a multi-agent framework that enables systematic idea refinement. The project’s emphasis on expanding LLM-driven agents beyond simple code generation is valuable, as it addresses current limitations in adaptability and performance. Strengths include the innovative use of multi-level feedback to diversify solution pathways and the plug-and-play compatibility with various code generation agents. However, further clarity on the metrics for evaluating the agents’ performance would strengthen the proposal. Overall, this work has the potential to significantly improve LLM-driven ML research and provides a solid foundation for advancing automated research processes. rating: 9 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
RbqR6NZamR
official_review
1,731,414,801,343
ATXlcxWGY3
[ "everyone" ]
[ "~Kittaphot_Saengprachathanarak1" ]
title: Review of "Systematic Idea Refinement for Machine Learning Research Agents" review: The paper presents a well-structured multi-agent framework for ML research agents, integrating multi-level feedback from LLM judgments, code generation, and experiments to enhance solution diversity beyond single-agent systems. It stands out for its originality, emulating human-like research exploration to address the limitations of prior code-focused methods. The plug-and-play design adds versatility, allowing integration with different code generation agents across various ML tasks. While the paper’s clarity and organization are strong, further empirical validation and detail on agent communication protocols would strengthen its contributions. Therefore, this innovative framework offers promising advancements for adaptable, autonomous ML research tools. rating: 9 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
Ai6OpoawIY
official_review
1,730,956,563,033
ATXlcxWGY3
[ "everyone" ]
[ "~Iat_Long_Iong1" ]
title: Good proposal review: The proposed multi-agent framework for systematic idea refinement in machine learning research demonstrates significant potential to revolutionize the field. By moving beyond code generation validity and embracing diverse methodologies, this approach enables LLM-empowered agents to explore a broader range of solutions, similar to human researchers. The integration of multi-level feedback ensures a dynamic and iterative learning process, leading to more informed decision-making and potentially groundbreaking discoveries. With its plug-and-play design and focus on real-world applicability, this framework holds the promise of transforming machine learning research, making it more efficient, accessible, and impactful for a wider audience. rating: 10 confidence: 4
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
7JRGpTRx87
official_review
1,731,299,214,306
ATXlcxWGY3
[ "everyone" ]
[ "~Feihong_Zhang1" ]
title: Systematic Idea Refinement for Machine Learning Research Agents review: Review Comments This paper proposes an innovative multi-agent framework aimed at enhancing the adaptability and diversity of large language model (LLM)-driven research agents for machine learning. By incorporating systematic research idea generation and a multi-level feedback mechanism, the framework provides guidance to code generation agents, enabling them to tackle a broader range of machine learning tasks. The work presents both theoretical novelty and practical applicability, with a clear structure and logical flow. Here are my detailed comments on the paper: Strengths Innovation: The authors introduce a multi-agent system to support research idea generation and a multi-level feedback mechanism, surpassing existing single-agent approaches. The framework’s feedback mechanism and methodological diversity promise improved adaptability when tackling complex tasks. Technical Details and Feasibility: The paper provides a well-defined description of the framework’s components, including the Proposal Agent, Code Generation Agent, Review Agent, and Experiment Conductor, clearly outlining each module’s role. This clarity offers a strong foundation for practical implementation. Evaluation Strategy: The authors plan to evaluate this framework across 75 Kaggle competition tasks and compare it with existing methods. Such empirical evaluation offers a robust means of validating the effectiveness and generalizability of the framework. Suggestions for Improvement Experimental Design Details: While the paper mentions that the framework will be tested across several Kaggle tasks, it lacks specific discussion on evaluation metrics such as accuracy or runtime. Including a more detailed performance comparison across different agent combinations would improve result credibility. Potential for Further Expansion: Currently, the framework mainly targets code generation agents. The authors are encouraged to explore this framework’s applicability to other areas, such as data processing or feature engineering, to demonstrate its versatility within broader machine learning workflows. Discussion on the Feedback Mechanism: Although the multi-level feedback mechanism is compelling, there is limited discussion on balancing feedback weights or dynamically adjusting feedback strategies. Further exploration of feedback mechanisms in future work could help optimize system performance. Summary Overall, this paper presents a highly innovative multi-agent framework that effectively enhances the adaptability and diversity of code generation agents through systematic idea generation and multi-level feedback. Although there is room for improvement in the experimental section, the framework design and implementation make a valuable contribution to the field. I recommend accepting this paper and look forward to seeing its broader applications and advancements in future research. rating: 8 confidence: 3
ATXlcxWGY3
【Proposal】Systematic Idea Refinement for Machine Learning Research Agents
[ "Zijun Liu", "Cheng Gao", "Hanxi Zhu" ]
This project aims to enhance the machine learning (ML) research capabilities of large language model (LLM)-empowered agents beyond basic code generation. While recent advancements have demonstrated the use of single-agent systems for generating code across various ML tasks, these methods often focus solely on improving code validity. They lack the ability to explore diverse methodologies for a given problem, which limits their adaptability and performance. To address this gap, the proposed project develops a multi-agent framework that systematically refines research idea guidelines through automatic proposal, feedback integration, and inference-time scaling. By incorporating multi-level feedback from LLM judgments, code generation processes, and experimental results, this approach enables agents to explore a broader range of solution pathways, similar to human researchers. The framework is plug-and-play on code generation agents, and will be evaluated on $\textbf{a total number of 75 Kaggle competitions}$. The expected outcome is an improvement in the understanding and performance of machine learning research agents through a comprehensive exploration of methodological ideas.
[ "Automatic Machine Learning", "Automatic Research", "Research Idea Generation", "LLM", "Agent" ]
https://openreview.net/pdf?id=ATXlcxWGY3
0Nqrdg5D8I
official_review
1,731,400,069,585
ATXlcxWGY3
[ "everyone" ]
[ "~Suraj_Joshi2" ]
title: Review on Proposal For ML Research Agents review: This project aims to expand the capabilities of LLM empowered agents beyond generating just a solution, that might not be optimal by using multiple agents. This project also aims to solve the problem of lack of innovation in output of LLM based agents that is heavily dependent on the initial prompt supplied. The "plug-and-play" nature of the code generation agent, designed to integrate easily with existing systems, is a practical feature that could facilitate broader adoption and adaptability. Overall, this a very good proposal, everything is clearly articulated. and the only thing I doubt about the project is if there are multiple agents exploring various ideas, and as the LLMs are non-deterministic models, sometimes small change in output of one agent can lead to highly inconsistent outputs. rating: 10 confidence: 5
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
uSd6YP01ie
official_review
1,731,141,728,750
87zKrpruDE
[ "everyone" ]
[ "~Jinsong_Xiao1" ]
title: review for proposal 54 review: This proposal tackles the challenge of training an agent to achieve the diamond acquisition task in Minecraft for the MineRL Competition using a combination of Curriculum Learning and Hierarchical Reinforcement Learning (HRL). Pros: The proposal aligns with the objectives of the MineRL competition and tackles a widely recognized problem in RL. The use of reverse curriculum learning, combined with hierarchical reinforcement learning, provides a structured, sample-efficient approach to the diamond acquisition problem. rating: 9 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
moQf16oEeI
official_review
1,731,327,919,526
87zKrpruDE
[ "everyone" ]
[ "~Ruowen_Zhao1" ]
title: Summary and Concern review: ### Summary This paper presents a method combining Hierarchical Reinforcement Learning (HRL) with reverse Curriculum Learning to address the MineRL competition’s ObtainDiamond task. By training the agent near the goal (diamond mining) and gradually introducing earlier sub-goals, the approach improves sample efficiency and convergence speed. The proposed architecture employs a manager trained on human demonstration data to assign sub-tasks to specialized workers. The authors present the technical approach with clarity. ### Concern Relying on predefined sub-goals from human demonstrations could make the agent overly specialized to the sequence provided, limiting its ability to adapt to tasks that require alternate sub-goal paths or dynamic adjustments in real-time. rating: 8 confidence: 3
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
hAX6ZhpLE6
official_review
1,731,403,427,154
87zKrpruDE
[ "everyone" ]
[ "~Wanlan_Ren1" ]
title: Review for "Training an Agent to find Diamond in Minecraft, MineRL Competition 2021" review: This proposal presents an effective approach to training an agent to obtain diamonds in Minecraft by combining hierarchical reinforcement learning (HRL) and reverse curriculum learning. The method leverages human demonstration data, enhancing the agent’s ability to learn complex tasks through structured sub-goals, which should improve sample efficiency and accelerate learning. However, the proposed approach may benefit from testing on more complex datasets that require multi-step logical reasoning, which would clarify its adaptability in diverse scenarios. Overall, this approach is promising and could contribute significantly to advancing agent performance in complex, multi-step decision-making tasks in reinforcement learning. rating: 8 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
g8EuVKYiFW
official_review
1,731,418,400,922
87zKrpruDE
[ "everyone" ]
[ "~jin_wang30" ]
title: Ambitious and interesting idea review: Overall, the article clearly demonstrates the ideas and methods of an innovative reinforcement learning project. The article has a high level of problem definition, a review of related research, and the proposed hierarchical learning strategy and reverse curriculum learning strategy, which demonstrates the potential value of the project in the MineRL competition task. Advantages: The article is well-structured and gradually introduces the background, problem definition, related research, and proposed methods. In the "Background" section, the article briefly outlines the successful application of reinforcement learning (RL) in the field of games and the goals of the MineRL competition, so that readers can quickly understand the background and importance of the project. The article defines the reinforcement learning problem in the Minecraft environment in detail, expresses the task as a Markov decision process (MDP), and explains concepts such as state, action, and reward. This definition helps to clarify the formal description of the problem and provides readers with a clear task framework. The article also proposes a "reverse curriculum learning" strategy, which enables the agent to learn goal-oriented behavior more efficiently by starting training from a state close to the goal and gradually introducing pre-tasks. This innovative training sequence can improve sample efficiency and accelerate convergence, demonstrating a unique idea for optimizing learning efficiency in complex tasks. The article introduces a "manager-worker" hierarchical structure in the method, allowing each worker to focus on completing a specific subtask, and the manager controls task allocation and subtask detection. This structure can better cope with the complexity of long-term tasks and enable agents to achieve goal-oriented learning in uncertain environments. Disadvantages: Although the article mentions the method architecture and hierarchical strategy, it lacks a detailed description of the specific algorithm implementation, especially in terms of the collaboration mechanism between managers and workers, subtask division and scheduling strategies. This may make it difficult for readers to understand the actual operational details and implementation difficulty of the method. In addition, although the article proposes an innovative combination of inverse curriculum learning and hierarchical reinforcement learning, it lacks an analysis of the limitations that these methods may bring. For example, the applicability of inverse curriculum learning in complex environments and the ability to handle sparse rewards are challenges that may be encountered in practical applications. rating: 9 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
c61tfYYOWC
official_review
1,730,957,021,797
87zKrpruDE
[ "everyone" ]
[ "~Iat_Long_Iong1" ]
title: Interesting idea with challenging environment review: The proposed method combines Curriculum Learning with Hierarchical Reinforcement Learning (HRL) to enhance efficiency in the Minecraft diamond acquisition task. By starting training near the goal and gradually adding earlier sub-goals, the agent can focus more effectively on goal-oriented tasks, improving sample efficiency and speeding up learning. The hierarchical approach allows a manager to assign specific sub-tasks to specialized workers, ensuring smooth progression through the task sequence. Minecraft's diverse challenges, such as encountering various monsters or lava in caves, pose significant obstacles to achieving the main goal of finding diamonds. Therefore, effectively allocating these challenges to different workers while ensuring the success rate of the main task becomes a crucial focus of this research. The proposal would benefit from a clearer explanation of how the manager's role in defining sub-goals and guiding workers will address these challenges and ensure task success. rating: 8 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
YGbev9rhGy
official_review
1,731,419,401,385
87zKrpruDE
[ "everyone" ]
[ "~Han-Xi_Zhu1" ]
title: Review for "Training an Agent to find Diamond in Minecraft, MineRL Competition 2021" review: The proposal outlines a project to train a reinforcement learning agent to find diamonds in Minecraft as part of the MineRL Competition 2021. The approach combines Curriculum Learning with Hierarchical Reinforcement Learning (HRL), using a "Top-Down" strategy starting near the goal and working backward to earlier sub-goals. The agent is designed to maximize rewards and ultimately obtain a diamond, with potential applications in imitating human behavior in complex, real-world settings. ## Strengths 1. The combination of Curriculum Learning and HRL is a creative strategy that could improve learning efficiency and goal-oriented behavior. 2. Authors give clear definition of the specific task and the task is very interesting. 3. Authors give detailed outline of their methodology. The material is well written. ## Weaknesses 1. The proposal could benefit from a more detailed discussion on how the agent's performance will be evaluated beyond the immediate task of diamond acquisition. rating: 8 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
W0Jrd2G2Zk
official_review
1,731,337,489,754
87zKrpruDE
[ "everyone" ]
[ "~Ziyu_Zhao6" ]
title: Review of "Training an Agent to find Diamond in Minecraft, MineRL Competition 2021" Proposal review: Overview: This proposal presents a sophisticated reinforcement learning approach aimed at tackling the MineRL ObtainDiamond task in Minecraft, a well-known benchmark for reinforcement learning and imitation learning methods. The proposed method combines Hierarchical Reinforcement Learning (HRL) with reverse Curriculum Learning to create a scalable, goal-driven training strategy. Strengths: 1. Strategic Curriculum Learning: The “Top-Down” or reverse Curriculum Learning approach is innovative, as it starts training from a goal-oriented perspective (near the final goal) and progressively adds earlier sub-tasks. This method improves sample efficiency and accelerates learning by reducing the need to train from random starting points. 2. Hierarchical Structure: The HRL setup, which divides tasks into sequential sub-goals and assigns each to specialized “worker” agents, promotes modular learning. This allows for a clear and organized approach, enabling each worker to specialize in a particular skill or sub-task. Limitations: 1. Complexity in Coordination: Coordinating the manager-worker framework with both HRL and Curriculum Learning increases the complexity of the implementation. 2. Scalability and Generalization: The proposed method might struggle to generalize to other long-horizon, real-world tasks without substantial adaptation. rating: 7 confidence: 3
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
T4dN77Sbur
official_review
1,731,118,398,250
87zKrpruDE
[ "everyone" ]
[ "~Yufei_Zhuang1" ]
title: Interesting and worth-a-try ideas review: The combination of Hierarchical Reinforcement Learning and "Top - Down" Curriculum Learning with human demonstrations is a smart move. It effectively addresses the complexity of the randomly generated environment and the series of tasks needed to reach the diamond. The approach's focus on increasing sample efficiency and adaptability is promising, especially for handling Minecraft's challenging nature. Overall, it seems well - conceived and has great potential to yield excellent results in the competition. By combining Hierarchical Reinforcement Learning with Curriculum Learning and incorporating human demonstrations, it can effectively improve the efficiency of sample utilization. Compared with other methods that do not adopt such a combination, it can make better use of the existing data for training. Utilizing human data and constructing a structured hierarchy of goals makes the training more targeted and can more efficiently move towards the completion of the specific task of "obtaining diamonds". Compared with methods lacking such goal hierarchy construction and utilization of human data, it can be more efficient in accomplishing complex tasks. rating: 8 confidence: 3
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
RRgi7nt83w
official_review
1,731,134,422,512
87zKrpruDE
[ "everyone" ]
[ "~Zijun_Liu2" ]
title: Review and Feedback review: ## Overview The project’s proposed solution to the MineRL contest combines Curriculum Learning with Hierarchical Reinforcement Learning (HRL), aiming to optimize the agent's efficiency by using a “Top-Down” Curriculum Learning approach. Also, the proposal has a comprehensive background and related work analysis, providing a solid foundation for the project. ## Suggestions and Feedback Overall, the proposal presents a promising approach to training an RL agent to tackle the complex task of obtaining a diamond in Minecraft. I suggest to include more in-depth evaluation metrics, such as convergence speed, sample efficiency, or adaptability across varied starting states, which would enable a more thorough assessment. rating: 9 confidence: 3
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
KK7rTi8BoE
official_review
1,731,379,607,140
87zKrpruDE
[ "everyone" ]
[ "~Zou_Dongchen1" ]
title: Review for this proposal review: This is a very interesting proposal. The authors want to train an agent to find Diamond in Minecraft, MineRL Competition 2021.The authors propose to use reverse Curriculum Learning and Hierarchical Reinforcement Learning (HRL) combined approach. The former helps the agent to focus more on goal achievement, while the latter involves managers and workers in a hierarchical approach to task decomposition and completion. This project is rich in real-world data and therefore is operational. The writing of the proposal is clear and of high quality. The weakness is that the Definition section is not very well articulated with the Proposed Method section. In the Definition section, the author describes the task as a Markov problem, but in the Proposed Method section, the author does not mention this. I hope the authors will explain the relationship between Markov and Reverse Curriculum Learning and Hierarchical Reinforcement Learning (HRL). rating: 10 confidence: 4
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
HlOU7F5jjD
official_review
1,731,300,328,494
87zKrpruDE
[ "everyone" ]
[ "~Ziang_Zheng1" ]
title: **Review for Paper Proposal: "Training an Agent to find Diamond in Minecraft, MineRL Competition 2021"** review: **Review for Paper Proposal: "Training an Agent to find Diamond in Minecraft, MineRL Competition 2021"** **Summary:** The paper proposes a method for training a reinforcement learning agent to find a diamond in Minecraft as part of the MineRL competition. The authors plan to use a combination of Curriculum Learning (CL) and Hierarchical Reinforcement Learning (HRL) to enhance the efficiency and success rate of the agent. They propose a novel “Top-Down” Curriculum Learning approach, where the agent is initially trained near the final goal and then gradually introduced to earlier sub-goals. Their method is structured around a manager-worker architecture that divides the task into sub-tasks, each assigned to specialized agents or “workers.” **Strengths:** 1. **Novel Curriculum Learning Approach**: The idea of starting training from the goal and working backward is innovative. By focusing on end goals first, the approach has the potential to improve sample efficiency and accelerate convergence. 2. **Clear Problem Definition and Background**: The proposal provides a comprehensive background on the MineRL competition and the challenges of the task. The authors demonstrate an understanding of prior approaches, particularly those that combine Imitation Learning (IL) with Reinforcement Learning (RL). 3. **Strong Motivation and Real-World Potential**: The authors make a compelling case for the broader applications of this approach beyond gaming, such as learning from human demonstrations in more complex real-world scenarios. **Areas for Improvement:** 1. **Lack of Detailed Evaluation Plan**: The proposal would benefit from a more detailed description of the evaluation metrics and methodologies. While maximizing reward is a goal, it would be helpful to specify how they intend to benchmark their agent against existing methods or measure the effectiveness of each component (e.g., HRL vs. CL). 2. **Clarification of Reverse Curriculum Learning Implementation**: While the reverse curriculum approach is novel, the paper could provide a clearer plan on how this technique will be implemented. How will the agent transition from one sub-goal to the next as the task complexity increases? Providing more insight into potential challenges, such as reward shaping, would strengthen this section. 3. **Discussion on Generalization**: It is unclear how well this method would generalize to scenarios with different environmental configurations or other Minecraft tasks. The proposal could benefit from a brief discussion on how adaptable their approach is for other settings or complex tasks beyond Minecraft. **Minor Points:** - Typographical or formatting errors, such as “Aset of actions” and “Arewardfunction,” need to be addressed. - The explanation of SQIL as a preferred baseline could be expanded with more specifics on why it might be particularly relevant in comparison to other methods. **Conclusion:** Overall, this proposal is promising and addresses a complex problem in reinforcement learning with a novel approach. While additional clarification on implementation and evaluation is needed, the method’s potential for advancing curriculum learning strategies in long-horizon tasks is substantial. With further development, this work could significantly contribute to the field of imitation learning and hierarchical reinforcement learning in complex environments. rating: 7 confidence: 3
87zKrpruDE
[Proposal-ML] Training an Agent to find Diamond in Minecraft, MineRL Competition 2021
[ "Gausse Mael DONGMO KENFACK", "Kittaphot Saengprachathanarak" ]
This proposal presents a novel approach for training an agent to solve the "Obtain Diamond" task in Minecraft, as defined by the MineRL competition 2021. The competition poses a unique challenge, as the agent must navigate a randomly generated environment to accomplish a series of complex tasks before successfully mining a diamond. Our method combines Hierarchical Reinforcement Learning (HRL) with Curriculum Learning, incorporating human demonstrations to enhance sample efficiency. Specifically, we adopt a "Top-Down" or "reverse" Curriculum Learning strategy, training the agent to accomplish the goal from progressively simpler sub-goals, starting near the objective and working backward to the starting state. By leveraging both human data and a structured hierarchy of goals, our approach aims to increase efficiency and adaptability, making it well-suited to handle the complex nature of the Minecraft environment.
[ "Reinforcement Learning", "Imitation Learning", "Minecraft", "MineRL" ]
https://openreview.net/pdf?id=87zKrpruDE
5etFmRp8LN
official_review
1,731,328,818,087
87zKrpruDE
[ "everyone" ]
[ "~Cheng_Gao2" ]
title: Review for Training an Agent to find Diamond in Minecraft, MineRL Competition 2021 review: Strengths: - Very clear task definition. - The proposed method, which combines Curriculum Learning with Hierarchical Reinforcement Learning, is innovative. The approach of decomposing the task in reverse from the goal should indeed be robust across a wide range of starting states. - The organization for the agents is also described in great detail. Weakness: - A brief introduction to the evaluation metrics and a short overview of other baselines would make this proposal more complete. Overall, the background and methodology in this proposal are described comprehensively. As I am not familiar with Minecraft, I don't think I can provide further suggestions. rating: 9 confidence: 3
7if3xZkBbG
[Proposal-ML] Mimicking Humanity: A synthetic data-based approach to voice cloning in Text to Speech Systems
[ "Michael Hua Wang", "Aleksandr Algazinov", "Joydeep Chandra" ]
Acquisition of training data poses significant challenges in many areas of machine learning. Data paucity has a negative impact on model accuracy and bias. One potential way to obtain more training data is by synthesizing it using some external tools. This project presents a voice cloning technique using synthetic data for text-to-speech systems, to address issues with data accessibility and privacy in conventional text-to-speech models. By utilizing artificial synthetic voice data, the model is created to mimic voice features such as pitch, tone, and timbre, allowing for the creation of speech that mimics a specific speaker while also keeping the original linguistic content intact. This method aims to improve model generalization across different speaker profiles and languages by not requiring large human speech datasets, allowing for local inference without needing proprietary data. If successful, this will demonstrate that synthetic data can be used to train AI systems if the task it is applied to is chosen correctly.
[ "Voice Cloning", "Speaker Generalization", "Text-To-Speech", "Synthetic Data", "Privacy" ]
https://openreview.net/pdf?id=7if3xZkBbG
pJGZMpaEFc
official_review
1,731,396,731,763
7if3xZkBbG
[ "everyone" ]
[ "~Suraj_Joshi2" ]
title: Review on Zero-Shot Locally Inferable Voice Cloning Model review: This project proposal proposes a good solution for generating real voices without requiring the target user to give their voice samples to third party. To achieve this goal, they propose idea of model that can run locally on edge devices, which can reduce the processing costs significantly. This model can be real helpful for low resource languages, which don't have publicly available large scale datasets for training large models, that are the current State of The Art models. Also, training on synthetic data can reduce the cost of data collection. Here are few things that would have made proposal a great one: 1. The proposal clearly demonstrates the inference pipeline, but I could not get how the model would be trained, what would be inputs to the model in the training phase. 2. How the model would compensate the effect of using synthetic dataset for training is also bit unclear from the proposal. 3. In the proposal authors mention, if training the model using synthetic data is successful in this case, the training methodology can be applied to other domains too. But I guess success of training model using synthetic data does not necessarily imply success on other domains too. There are multiple domains, where training models using synthetic data have been practiced for long time (like robotics) but these domains face a great issue to to Sim2Real gap. 4. Finally some information how to make model that can run inference on edge devices would also have made the proposal a great one. rating: 9 confidence: 4
7if3xZkBbG
[Proposal-ML] Mimicking Humanity: A synthetic data-based approach to voice cloning in Text to Speech Systems
[ "Michael Hua Wang", "Aleksandr Algazinov", "Joydeep Chandra" ]
Acquisition of training data poses significant challenges in many areas of machine learning. Data paucity has a negative impact on model accuracy and bias. One potential way to obtain more training data is by synthesizing it using some external tools. This project presents a voice cloning technique using synthetic data for text-to-speech systems, to address issues with data accessibility and privacy in conventional text-to-speech models. By utilizing artificial synthetic voice data, the model is created to mimic voice features such as pitch, tone, and timbre, allowing for the creation of speech that mimics a specific speaker while also keeping the original linguistic content intact. This method aims to improve model generalization across different speaker profiles and languages by not requiring large human speech datasets, allowing for local inference without needing proprietary data. If successful, this will demonstrate that synthetic data can be used to train AI systems if the task it is applied to is chosen correctly.
[ "Voice Cloning", "Speaker Generalization", "Text-To-Speech", "Synthetic Data", "Privacy" ]
https://openreview.net/pdf?id=7if3xZkBbG
lymnrf1iNF
official_review
1,731,168,775,240
7if3xZkBbG
[ "everyone" ]
[ "~Liutao7" ]
title: A Potential Synthetic Data-Based Voice Cloning Method, but Some Technical Challenges Need to Be Overcome review: The proposal presents a voice cloning method based on synthetic data, aiming to address the issue of traditional speech synthesis models' dependence on a large amount of real voice data, and to improve the generalization ability of voice cloning and user privacy protection. The proposal demonstrates high integrity and creativity, and proposes a clear model architecture and training plan. Advantages: Integrity and Creativity: The proposal clearly articulates the research background, related work, method design, and expected goals, and proposes an innovative solution using synthetic data for pre-training. Workload: The proposal shows a deep understanding of related literature and proposes a feasible technical roadmap, indicating that the researchers have done thorough preparatory work. Areas for Improvement: Lacks description of some key technical details, such as the voice conversion block and embedding; an evaluation system should be added, selecting appropriate metrics to assess the voice quality, naturalness, and similarity to the target speaker. rating: 9 confidence: 4
7if3xZkBbG
[Proposal-ML] Mimicking Humanity: A synthetic data-based approach to voice cloning in Text to Speech Systems
[ "Michael Hua Wang", "Aleksandr Algazinov", "Joydeep Chandra" ]
Acquisition of training data poses significant challenges in many areas of machine learning. Data paucity has a negative impact on model accuracy and bias. One potential way to obtain more training data is by synthesizing it using some external tools. This project presents a voice cloning technique using synthetic data for text-to-speech systems, to address issues with data accessibility and privacy in conventional text-to-speech models. By utilizing artificial synthetic voice data, the model is created to mimic voice features such as pitch, tone, and timbre, allowing for the creation of speech that mimics a specific speaker while also keeping the original linguistic content intact. This method aims to improve model generalization across different speaker profiles and languages by not requiring large human speech datasets, allowing for local inference without needing proprietary data. If successful, this will demonstrate that synthetic data can be used to train AI systems if the task it is applied to is chosen correctly.
[ "Voice Cloning", "Speaker Generalization", "Text-To-Speech", "Synthetic Data", "Privacy" ]
https://openreview.net/pdf?id=7if3xZkBbG
j6ElfhDU8S
official_review
1,730,867,775,473
7if3xZkBbG
[ "everyone" ]
[ "~Thomas_Adler2" ]
title: Straight to the point review: Proposal is easy to read, clear and straight to the point. The methodology is well explained and we can see how this approach is different compared to previous methods (using synthetic data makes it more generalisable, aim of making it more computationally efficient etc) rating: 10 confidence: 4