Dataset columns:
- arxiv_id: string (lengths 9 to 12)
- paper: string (lengths 2.65k to 90.8k)
- targets: sequence (length 4)
- targets_idx: sequence (length 4)
- cite_corpus_id_map: string (lengths 108 to 31.6k)
2405.08839
<|paper_start|> Title: PromptMind Team at EHRSQL-2024: Improving Reliability of SQL Generation using Ensemble LLMs Abstract: PromptMind Team at EHRSQL-2024: Improving Reliability of SQL Generation using Ensemble LLMs: This paper presents our approach to the EHRSQL-2024 shared task, which aims to develop a reliable Text-to-SQL system for electronic health records. We propose two approaches that leverage large language models (LLMs) for prompting and fine-tuning to generate EHRSQL queries. In both techniques, we concentrate on bridging the gap between the real-world knowledge on which LLMs are trained and the domain-specific knowledge required for the task. The paper provides the results of each approach individually, demonstrating that they achieve high execution accuracy. Additionally, we show that an ensemble approach further enhances generation reliability by reducing errors. This approach secured us 2nd place in the shared task competition. The methodologies outlined in this paper are designed to be transferable to domain-specific Text-to-SQL problems that emphasize both accuracy and reliability. Introduction Text-to-SQL technology translates natural language questions into executable SQL queries that can answer the questions using a provided database. A robust Text-to-SQL system could significantly increase productivity for anyone using databases by providing an easy-to-use natural language interface and reducing the need for expertise in different SQL dialects. These systems are particularly valuable in domains where SQL expertise cannot be assumed, such as healthcare, where professionals like doctors, nurses, and hospital administrators spend a significant amount of time interacting with patient health records stored in databases. In the era of Large Language Models (LLMs), the field of Text-to-SQL is gaining prominence as these models demonstrate impressive text generation capabilities without the need for fine-tuning. Introduced in 2017, WikiSQL <|cite_start|> (Reference: Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning: A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence to sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.) <|cite_end|> remains one of the largest datasets for Text-to-SQL and primarily caters to relatively simple queries.
Subsequently, the SPIDER <|cite_start|> (Reference: Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task: We present Spider, a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most of the previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models and the best model achieves only 12.4% exact matching accuracy on a database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at https://yale-lily.github.io/spider) <|cite_end|> and MULTI-SPIDER <|cite_start|> (Reference: MultiSpider: Towards Benchmarking Multilingual Text-to-SQL Semantic Parsing: Text-to-SQL semantic parsing is an important NLP task, which greatly facilitates the interaction between users and the database and becomes the key component in many human-computer interaction systems. Much recent progress in text-to-SQL has been driven by large-scale datasets, but most of them are centered on English. In this work, we present MultiSpider, the largest multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Upon MultiSpider, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialect sayings) and their intensity across different languages. Experimental results under three typical settings (zero-shot, monolingual and multilingual) reveal a 6.1% absolute drop in accuracy in non-English languages. Qualitative and quantitative analyses are conducted to understand the reason for the performance drop of each language. Besides the dataset, we also propose a simple schema augmentation framework SAVe (Schema-Augmentation-with-Verification), which significantly boosts the overall performance by about 1.8% and closes the 29.5% performance gap across languages.) <|cite_end|> datasets were developed. These datasets posed challenges with complex queries that required an understanding of the database schema and support for various languages. BIRD-Bench was introduced to bridge the gap between research and real-world applications by providing large and imperfect databases <|cite_start|> (Reference: Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs: Text-to-SQL parsing, which aims at converting natural language instructions into executable SQLs, has gained increasing attention in recent years. In particular, Codex and ChatGPT have shown impressive results in this task. However, most of the prevalent benchmarks, i.e., Spider, and WikiSQL, focus on database schema with few rows of database contents leaving the gap between academic study and real-world applications. 
To mitigate this gap, we present Bird, a big benchmark for large-scale database grounded in text-to-SQL tasks, containing 12,751 pairs of text-to-SQL data and 95 databases with a total size of 33.4 GB, spanning 37 professional domains. Our emphasis on database values highlights the new challenges of dirty database contents, external knowledge between NL questions and database contents, and SQL efficiency, particularly in the context of massive databases. To solve these problems, text-to-SQL models must feature database value comprehension in addition to semantic parsing. The experimental results demonstrate the significance of database values in generating accurate text-to-SQLs for big databases. Furthermore, even the most effective text-to-SQL models, i.e. ChatGPT, only achieves 40.08% in execution accuracy, which is still far from the human result of 92.96%, proving that challenges still stand. Besides, we also provide an efficiency analysis to offer insights into generating text-to-efficient-SQLs that are beneficial to industries. We believe that BIRD will contribute to advancing real-world applications of text-to-SQL research. The leaderboard and source code are available: https://bird-bench.github.io/.) <|cite_end|>. These datasets are good representations of typical Text-to-SQL tasks. However, the healthcare domain differs from these generic datasets for the following reasons: \begin{itemize} \item The questions asked by users may be highly specialized and specific to the medical field. \item To answer such questions, systems must also possess an understanding of clinical terminology. \item Reliability is of paramount importance, as errors can have serious consequences. \end{itemize} These differences present unique challenges for developing a reliable Text-to-SQL system for the healthcare domain. EHRSQL is the first dataset that closely captures the needs of hospital staff and is well suited for building and testing Text-to-SQL systems in the healthcare domain <|cite_start|> (Reference: EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records: We present a new text-to-SQL dataset for electronic health records (EHRs). The utterances were collected from 222 hospital staff members, including physicians, nurses, and insurance review and health records teams. To construct the QA dataset on structured EHR data, we conducted a poll at a university hospital and used the responses to create seed questions. We then manually linked these questions to two open-source EHR databases, MIMIC-III and eICU, and included various time expressions and held-out unanswerable questions in the dataset, which were also collected from the poll. Our dataset poses a unique set of challenges: the model needs to 1) generate SQL queries that reflect a wide range of needs in the hospital, including simple retrieval and complex operations such as calculating survival rate, 2) understand various time expressions to answer time-sensitive questions in healthcare, and 3) distinguish whether a given question is answerable or unanswerable. We believe our dataset, EHRSQL, can serve as a practical benchmark for developing and assessing QA models on structured EHR data and take a step further towards bridging the gap between text-to-SQL research and its real-life deployment in healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL.) <|cite_end|>. Our solution aims to create a Text-to-SQL system that emphasizes both reliability and accuracy.
To achieve this, we divide the task into two phases: \begin{itemize} \setlength\itemsep{0em} \item SQL Generation \item SQL Validation \end{itemize} In the first stage, we focus on SQL generation, employing techniques that include prompting and fine-tuning of LLMs. In both approaches, we use the same prompting strategy to provide the LLM with database information and question-related context. Specifically, we use table schemas combined with sample column values as the database context, and similar questions from the training data as the task context. To identify similar questions from the training data, we employ an embedding-based similarity technique (a minimal code sketch of this step follows the related-work paragraph below). Using this context, our goal is to maximize the LLM's ability to generate highly accurate SQL statements. There are several reasons why LLMs may fail to generate correct SQL for a given question. Some common reasons include: \begin{itemize} \setlength\itemsep{0em} \item Misinterpretation of the question's intent \item Incorrect assumptions or hallucinations about the database's tables or columns \item Inaccuracies or hallucinations in the generated SQL query \end{itemize} Unlike many text generation tasks, Text-to-SQL tasks have a limited number of correct answers but potentially infinite incorrect ones. Motivated by this, we develop a second stage that evaluates the accuracy of the generated SQL. To do so, we propose an approach that combines the results of multiple robust LLMs. Stronger LLMs often produce consistent outputs despite variations in temperature or other parameters, while smaller LLMs show lower consistency and accuracy. By leveraging the strengths of several robust LLMs, our approach minimizes the number of incorrect SQL queries and enhances the overall robustness and reliability of the Text-to-SQL system. In the remainder of this paper, we discuss related work, introduce the EHRSQL-2024 task and dataset, and present our two-stage approach. We then provide the results of our experiments and conclude with a summary of our findings. Related Work Prior to the advent of LLMs, the primary focus of research in natural language processing involved refining specialized models using innovative strategies <|cite_start|> (Reference: RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers: When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement. Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard. In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment. Our implementation will be open-sourced at https://github.com/Microsoft/rat-sql.) <|cite_end|>.
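A minimal sketch of the embedding-based retrieval step referenced above: the k training questions closest to the input question in embedding space are selected as few-shot examples. The embedding model and function names here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: retrieve the k most similar training questions to use as few-shot
# examples in the prompt, alongside table schemas and sample column values.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

def top_k_similar(question: str, train_questions: list[str], k: int = 5) -> list[str]:
    """Return the k training questions closest to `question` in embedding space."""
    embs = model.encode([question] + train_questions)
    q, rest = embs[0], embs[1:]
    # Cosine similarity between the query and every training question.
    sims = rest @ q / (np.linalg.norm(rest, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:k]
    return [train_questions[i] for i in order]
```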
Additionally, substantial efforts were devoted to developing sophisticated pre-training methodologies, such as those proposed by STAR <|cite_start|> (Reference: {{STAR: 焦虑测验研究协会(Society for Test Anxiety Research)于1987年6月25—27日在挪威的卑尔根大学召开了第八届国际大会。本届大会的主席由卑尔根大学心理系教授KnutA.Hagtvet 担任。在开幕式上,STAR 的主席、美国加利福尼亚大学(伯克利)心理学教授Martin V.Covington 与卑尔根大学校长Magne Lerheim 博士致了开幕词。大会共分) <|cite_end|>, and exploring decoding strategies, as exemplified by PICARD. However, these approaches typically require substantial computational resources and novel techniques. Large Language Models (LLMs) have been trained extensively on textual data, which has equipped them with vast knowledge. As a result, they exhibit exceptional probabilistic reasoning abilities and can excel at various tasks even without explicit training. Zero-shot prompting techniques, when used with LLMs, have not only narrowed the performance gap on Text-to-SQL but have also surpassed specialized pre-trained or fine-tuned models. Several prompt techniques have been developed based on this zero-shot approach for Text-to-SQL tasks, leading to remarkable achievements on datasets such as SPIDER <|cite_start|> (Reference: C3: Zero-shot Text-to-SQL with ChatGPT: This paper proposes a ChatGPT-based zero-shot Text-to-SQL method, dubbed C3, which achieves 82.3\% in terms of execution accuracy on the holdout test set of Spider and becomes the state-of-the-art zero-shot Text-to-SQL method on the Spider Challenge. C3 consists of three key components: Clear Prompting (CP), Calibration with Hints (CH), and Consistent Output (CO), which are corresponding to the model input, model bias and model output respectively. It provides a systematic treatment for zero-shot Text-to-SQL. Extensive experiments have been conducted to verify the effectiveness and efficiency of our proposed method.) <|cite_end|>, <|cite_start|> (Reference: A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability: This paper presents the first comprehensive analysis of ChatGPT's Text-to-SQL ability. Given the recent emergence of large-scale conversational language model ChatGPT and its impressive capabilities in both conversational abilities and code generation, we sought to evaluate its Text-to-SQL performance. We conducted experiments on 12 benchmark datasets with different languages, settings, or scenarios, and the results demonstrate that ChatGPT has strong text-to-SQL abilities. Although there is still a gap from the current state-of-the-art (SOTA) model performance, considering that the experiment was conducted in a zero-shot scenario, ChatGPT's performance is still impressive. Notably, in the ADVETA (RPL) scenario, the zero-shot ChatGPT even outperforms the SOTA model that requires fine-tuning on the Spider dataset by 4.1\%, demonstrating its potential for use in practical applications. To support further research in related fields, we have made the data generated by ChatGPT publicly available at https://github.com/THU-BPM/chatgpt-sql.) <|cite_end|>. Zero-shot generation capabilities can be further enhanced through techniques like in-context learning (ICL) and few-shot prompting. DIN-SQL <|cite_start|> (Reference: DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction: There is currently a significant gap between the performance of fine-tuned models and prompting approaches using Large Language Models (LLMs) on the challenging task of text-to-SQL, as evaluated on datasets such as Spider. 
To improve the performance of LLMs in the reasoning process, we study how decomposing the task into smaller sub-tasks can be effective. In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance. Our experiments with three LLMs show that this approach consistently improves their simple few-shot performance by roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 and the new SOTA at the time of this writing using our approach is 85.3. Our approach with in-context learning beats many heavily fine-tuned models by at least 5%. Additionally, when evaluated on the BIRD benchmark, our approach achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test set.) <|cite_end|> adopts an in-context learning approach to break down complex SQL generation into manageable sub-tasks, leading to improved performance on intricate queries. Another technique, retrieval-augmented generation, provides relevant and helpful examples as a few-shot to guide SQL generation <|cite_start|> (Reference: Retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain: Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases. Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL. However, it faces challenges with strict SQL syntax requirements. Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large. In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain. Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question. To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval. Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions. To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL. Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.) <|cite_end|>. These approaches have proven effective on general Text-to-SQL tasks but they have not yet been studied rigorously on domain-specific Text-to-SQL problems. Retrieval Augmented Fine-tuning (RAFT) introduces a novel fine-tuning technique that improves the in-domain performance of RAG while integrating domain-specific knowledge <|cite_start|> (Reference: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks: Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. 
However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.) <|cite_end|>. Through our work, we delve into the application of these techniques for the EHRSQL-2024 task. <|paper_end|>
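To make the ensemble idea above concrete, the following is a hedged sketch of the validation stage: candidate queries from several LLMs are executed, and a query is accepted only when a strict majority agree on the execution result. Comparing execution results rather than query strings, and abstaining when there is no majority, are assumptions about the mechanism for illustration, not the authors' exact rule.

```python
# Sketch: majority-vote ensemble over candidate SQL queries, compared by
# execution result against the target database.
import sqlite3
from collections import Counter

def execute(db_path: str, sql: str):
    """Run a query and return a hashable result, or None on failure.
    Note: this comparison is order-sensitive; a real system might sort rows."""
    try:
        with sqlite3.connect(db_path) as conn:
            return tuple(map(tuple, conn.execute(sql).fetchall()))
    except sqlite3.Error:
        return None

def ensemble_select(db_path: str, candidates: list[str]):
    """Return a candidate whose execution result wins a strict majority vote,
    or None (abstain) when no majority exists -- abstaining is safer than
    returning a likely-wrong query in a clinical setting."""
    results = [(sql, execute(db_path, sql)) for sql in candidates]
    tally = Counter(r for _, r in results if r is not None)
    if not tally:
        return None
    best, votes = tally.most_common(1)[0]
    if votes * 2 <= len(candidates):  # no strict majority -> abstain
        return None
    return next(sql for sql, r in results if r == best)
```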
[ "<|reference_start|> Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning: A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence to sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%. <|reference_end|>", "<|reference_start|> EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records: We present a new text-to-SQL dataset for electronic health records (EHRs). The utterances were collected from 222 hospital staff members, including physicians, nurses, and insurance review and health records teams. To construct the QA dataset on structured EHR data, we conducted a poll at a university hospital and used the responses to create seed questions. We then manually linked these questions to two open-source EHR databases, MIMIC-III and eICU, and included various time expressions and held-out unanswerable questions in the dataset, which were also collected from the poll. Our dataset poses a unique set of challenges: the model needs to 1) generate SQL queries that reflect a wide range of needs in the hospital, including simple retrieval and complex operations such as calculating survival rate, 2) understand various time expressions to answer time-sensitive questions in healthcare, and 3) distinguish whether a given question is answerable or unanswerable. We believe our dataset, EHRSQL, can serve as a practical benchmark for developing and assessing QA models on structured EHR data and take a step further towards bridging the gap between text-to-SQL research and its real-life deployment in healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL. <|reference_end|>", "<|reference_start|> RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers: When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement. 
Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard. In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment. Our implementation will be open-sourced at https://github.com/Microsoft/rat-sql. <|reference_end|>", "<|reference_start|> {{STAR: 焦虑测验研究协会(Society for Test Anxiety Research)于1987年6月25—27日在挪威的卑尔根大学召开了第八届国际大会。本届大会的主席由卑尔根大学心理系教授KnutA.Hagtvet 担任。在开幕式上,STAR 的主席、美国加利福尼亚大学(伯克利)心理学教授Martin V.Covington 与卑尔根大学校长Magne Lerheim 博士致了开幕词。大会共分 <|reference_end|>" ]
[ 0, 4, 5, 6 ]
{"<|cite_11|>": "arxiv-133344", "<|cite_12|>": "arxiv-173798", "<|cite_13|>": "arxiv-471954", "<|cite_1|>": "arxiv-502274", "<|cite_2|>": "arxiv-475465", "<|cite_3|>": "ss-735584", "<|cite_4|>": "ss-944738", "<|cite_6|>": "arxiv-523229", "<|cite_7|>": "arxiv-491508", "<|cite_8|>": "arxiv-498879", "<|cite_9|>": "ss-1600919", "<|cite_10|>": "ss-759091"}
2311.15838
<|paper_start|> Title: Utilizing Explainability Techniques for Reinforcement Learning Model Assurance Abstract: Utilizing Explainability Techniques for Reinforcement Learning Model Assurance: Explainable Reinforcement Learning (XRL) can provide transparency into the decision-making process of a Deep Reinforcement Learning (DRL) model and increase user trust and adoption in real-world use cases. By utilizing XRL techniques, researchers can identify potential vulnerabilities within a trained DRL model prior to deployment, therefore limiting the potential for mission failure or mistakes by the system. This paper introduces the ARLIN (Assured RL Model Interrogation) Toolkit, an open-source Python library that identifies potential vulnerabilities and critical points within trained DRL models through detailed, human-interpretable explainability outputs. To illustrate ARLIN's effectiveness, we provide explainability visualizations and vulnerability analysis for a publicly available DRL model. The open-source code repository is available for download at https://github.com/mitre/arlin. Introduction Over the last decade, reinforcement learning has increased in popularity due to its ability to achieve superhuman performance on a variety of classic board <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|> and video game <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|> environments. This gain in popularity has sparked an interest in using DRL for both decision support and autonomous operation within safety-critical scenarios such as air-to-air combat <|cite_start|> (Reference: Hierarchical Reinforcement Learning for Air-to-Air Combat: Artificial Intelligence (AI) is becoming a critical component in the defense industry, as recently demonstrated by DARPA`s AlphaDogfight Trials (ADT). ADT sought to vet the feasibility of AI algorithms capable of piloting an F-16 in simulated air-to-air combat. As a participant in ADT, Lockheed Martin`s (LM) approach combines a hierarchical architecture with maximum-entropy reinforcement learning (RL), integrates expert knowledge through reward shaping, and supports modularity of policies. This approach achieved a $2^{nd}$ place finish in the final ADT event (among eight total competitors) and defeated a graduate of the US Air Force's (USAF) F-16 Weapons Instructor Course in match play.) <|cite_end|>, nuclear power plant optimization <|cite_start|> (Reference: Magnetic control of tokamak plasmas through deep reinforcement learning: ) <|cite_end|>, and ballistic missile guidance <|cite_start|> (Reference: Terminal Adaptive Guidance for Autonomous Hypersonic Strike Weapons via Reinforcement Learning: An adaptive guidance system suitable for the terminal phase trajectory of a hypersonic strike weapon is optimized using reinforcement meta learning. 
The guidance system maps observations directly to commanded bank angle, angle of attack, and sideslip angle rates. Importantly, the observations are directly measurable from radar seeker outputs with minimal processing. The optimization framework implements a shaping reward that minimizes the line of sight rotation rate, with a terminal reward given if the agent satisfies path constraints and meets terminal accuracy and speed criteria. We show that the guidance system can adapt to off-nominal flight conditions including perturbation of aerodynamic coefficient parameters, actuator failure scenarios, sensor scale factor errors, and actuator lag, while satisfying heating rate, dynamic pressure, and load path constraints, as well as a minimum impact speed constraint. We demonstrate precision strike capability against a maneuvering ground target and the ability to divert to a new target, the latter being important to maximize strike effectiveness for a group of hypersonic strike weapons. Moreover, we demonstrate a threat evasion strategy against interceptors with limited midcourse correction capability, where the hypersonic strike weapon implements multiple diverts to alternate targets, with the last divert to the actual target. Finally, we include preliminary results for an integrated guidance and control system in a six degrees-of-freedom environment.) <|cite_end|>. These use-cases are considered high-risk as even small mistakes can result in large losses of monetary value, equipment, and life. Before DRL models can safely be deployed within real-world safety critical environments, their associated vulnerabilities need to be identified and understood so effective training enhancements and verification guardrails can be implemented. In this paper, we present the ARLIN Toolkit, an open-source research library written in Python that provides explainability outputs and vulnerability detection for DRL models, specifically designed to increase model assurance and identify potential points of failure within a trained model. To our knowledge, ARLIN is the first open-sourced Python toolkit focused on utilizing explainability techniques to assure RL models prior to deployment. ARLIN utilizes \textit{matplotlib} <|cite_start|> (Reference: Matplotlib: A 2d Graphics Environment: Matplotlib is a 2D graphics package used for Python for application development, interactive scripting,and publication-quality image generation across user interfaces and operating systems) <|cite_end|> and \textit{networkx} <|cite_start|> (Reference: Exploring Network Structure, Dynamics, and Function using NetworkX: NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self-loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, and degree distribution and many more. 
NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdos-Renyi, Small World, and Barabasi-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks.) <|cite_end|> to visualize a trained DRL model's decision making process and provide meaningful vulnerability identification and analysis to researchers. The modular library is structured to support custom architectures, algorithms, DRL frameworks, and analytics; and provides a well-documented and tested API for XRL research development and model assurance. The ARLIN repository is available for download at \texttt{https://github.com/mitre/arlin}. <|paper_end|>
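The paper states that ARLIN builds its explainability visualizations with matplotlib and networkx. The following is a generic sketch of that visualization pattern, a transition graph over clusters of visited states; it illustrates the idea only and is not ARLIN's actual API, and the `episodes` format is a hypothetical output of some state-clustering step applied to rollouts.

```python
# Sketch: draw a directed graph whose nodes are clusters of visited states
# and whose edge widths reflect how often each transition was observed.
import matplotlib.pyplot as plt
import networkx as nx

def plot_transition_graph(episodes):
    """episodes: list of lists of (cluster_id, action) pairs (hypothetical)."""
    g = nx.DiGraph()
    for episode in episodes:
        for (src, action), (dst, _) in zip(episode, episode[1:]):
            if g.has_edge(src, dst):
                g[src][dst]["weight"] += 1
            else:
                g.add_edge(src, dst, weight=1, action=action)
    pos = nx.spring_layout(g, seed=0)
    widths = [g[u][v]["weight"] for u, v in g.edges]
    nx.draw(g, pos, with_labels=True, node_color="lightsteelblue", width=widths)
    plt.title("State-cluster transition graph (illustrative)")
    plt.show()
```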
[ "<|reference_start|> Mastering the game of Go with deep neural networks and tree search: <|reference_end|>", "<|reference_start|> Magnetic control of tokamak plasmas through deep reinforcement learning: <|reference_end|>", "<|reference_start|> Terminal Adaptive Guidance for Autonomous Hypersonic Strike Weapons via Reinforcement Learning: An adaptive guidance system suitable for the terminal phase trajectory of a hypersonic strike weapon is optimized using reinforcement meta learning. The guidance system maps observations directly to commanded bank angle, angle of attack, and sideslip angle rates. Importantly, the observations are directly measurable from radar seeker outputs with minimal processing. The optimization framework implements a shaping reward that minimizes the line of sight rotation rate, with a terminal reward given if the agent satisfies path constraints and meets terminal accuracy and speed criteria. We show that the guidance system can adapt to off-nominal flight conditions including perturbation of aerodynamic coefficient parameters, actuator failure scenarios, sensor scale factor errors, and actuator lag, while satisfying heating rate, dynamic pressure, and load path constraints, as well as a minimum impact speed constraint. We demonstrate precision strike capability against a maneuvering ground target and the ability to divert to a new target, the latter being important to maximize strike effectiveness for a group of hypersonic strike weapons. Moreover, we demonstrate a threat evasion strategy against interceptors with limited midcourse correction capability, where the hypersonic strike weapon implements multiple diverts to alternate targets, with the last divert to the actual target. Finally, we include preliminary results for an integrated guidance and control system in a six degrees-of-freedom environment. <|reference_end|>", "<|reference_start|> Exploring Network Structure, Dynamics, and Function using NetworkX: NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self-loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, and degree distribution and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdos-Renyi, Small World, and Barabasi-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks. <|reference_end|>" ]
[ 0, 3, 4, 6 ]
{"<|cite_1|>": "ss-805362", "<|cite_2|>": "arxiv-54263", "<|cite_3|>": "arxiv-338526", "<|cite_4|>": "ss-737262", "<|cite_5|>": "arxiv-370952", "<|cite_6|>": "ss-972587", "<|cite_7|>": "ss-817053"}
2001.11973
<|paper_start|> Title: Unsatisfiability Proofs for Weight 16 Codewords in Lam's Problem Abstract: Unsatisfiability Proofs for Weight 16 Codewords in Lam's Problem: In the 1970s and 1980s, searches performed by L. Carter, C. Lam, L. Thiel, and S. Swiercz showed that projective planes of order ten with weight 16 codewords do not exist. These searches required highly specialized and optimized computer programs and required about 2,000 hours of computing time on mainframe and supermini computers. In 2011, these searches were verified by D. Roy using an optimized C program and 16,000 hours on a cluster of desktop machines. We performed a verification of these searches by reducing the problem to the Boolean satisfiability problem (SAT). Our verification uses the cube-and-conquer SAT solving paradigm, symmetry breaking techniques using the computer algebra system Maple, and a result of Carter that there are ten nonisomorphic cases to check. Our searches completed in about 30 hours on a desktop machine and produced nonexistence proofs of about 1 terabyte in the DRAT (deletion resolution asymmetric tautology) format. Introduction Geometry is one of the oldest branches of mathematics, being first axiomatically studied by Euclid in the 3rd century BC. Given a line and a point not on it, Euclid's ``parallel postulate'' implies that there exists exactly one line through the point and parallel to the given line. For 2000 years mathematicians tried in vain to prove this axiom but eventually geometries that did not satisfy the parallel postulate were discovered. For example, in the early seventeenth century G. Desargues studied \emph{projective geometry} where parallel lines do not exist. Projective geometry became widely studied in the nineteenth century, leading to the discovery of projective geometries containing a finite number of points. Despite a huge amount of study for over 200 years, some basic questions about finite projective geometries remain open---for example, how many points can a finite projective plane contain? It is known that this number must be of the form $n^2+n+1$ for some natural number~$n$ (known as the \emph{order} of the plane) and certain orders such as $n=6$ have been ruled out by theoretical arguments. For every other~$n$ up to ten a finite projective plane of order~$n$ can be shown to exist through an explicit construction. No theoretical explanation is known that answers the question if a projective plane of order ten exists and answering this question has since become known as \emph{Lam's problem}. In the 1970s and 1980s an enormous amount of computing was used to show that no such plane exists <|cite_start|> (Reference: The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it.) <|cite_end|>. 
The computations were based on the existence of codewords in the error-correcting code generated by a projective plane of order ten. It was shown that such a code must contain codewords of weights 15, 16, or~19---but exhaustive searches showed that such codewords do not exist. Each search required more advanced search techniques and orders of magnitude more computational power than the previous search---the weight~15 search being the easiest and the weight~19 search being the most challenging. In this paper we focus on the weight~16 search that originally required about 2,000 hours on supercomputers and a VAX-11 supermini machine. Additionally, in 2011, the weight~16 search was verified using an optimized C implementation in 16,000 core hours split across fifteen desktop machines. We provide a reduction of the weight~16 codeword existence problem to the Boolean satisfiability problem (SAT) and a SAT certification that the resulting instances are unsatisfiable. We do this using the cube-and-conquer SAT solving paradigm <|cite_start|> (Reference: Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads: ) <|cite_end|> together with functionality from the computer algebra system Maple for symmetry breaking. See Section~\ref{sec:background} for background on the cube-and-conquer paradigm and Section~\ref{sec:sat} for a description of our SAT encoding and symmetry breaking methods. Our search completed in about 30 hours on a desktop machine, significantly faster than any previous search. Furthermore, no previous search was able to provide any kind of certificate following a successful completion. Thus, an independent party had to take on faith that the searches did in fact complete. In contrast, our search produces unsatisfiability certificates that an independent party can use to verify that our searches were successfully run to completion. The proofs of nonexistence generated by the SAT solver amounted to about 1 terabyte in the uncompressed DRAT (deletion resolution asymmetric tautology) format <|cite_start|> (Reference: DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs: ) <|cite_end|>. See Section~\ref{sec:results} for details on our implementation and results. We do not claim our search is a formal verification because our encoding relies on many mathematical properties that were not derived in a computer-verifiable form, such as the result that there are ten nonisomorphic cases that need to be considered~\cite{carter1974existence}, in addition to the correctness of our encoding and implementation. However, we now have a potential method for producing a formal proof: by formally deriving our SAT encoding from the projective plane axioms. This would require expertise in both projective geometry and a formal proof system and would be a significant undertaking. However, the tools to do this already exist and have been used to formally verify other results derived using SAT certificates <|cite_start|> (Reference: Formally Verifying the Solution to the Boolean Pythagorean Triples Problem: ) <|cite_end|> <|cite_start|> (Reference: SMTCoq: Mixing Automatic and Interactive Proof Technologies: ) <|cite_end|>. Related Work \label{sec:background} We now describe the background necessary to understand the nonexistence results of this paper, including the method that we used to solve the SAT instances and the mathematical background on projective planes and their symmetry groups that is necessary to understand our SAT reduction.
\paragraph{The cube-and-conquer paradigm.} The cube-and-conquer paradigm was first developed by Heule, Kullmann, Wieringa, and Biere <|cite_start|> (Reference: Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads: ) <|cite_end|> for computing van der Waerden numbers, a notoriously difficult computational problem from combinatorics. In recent years the cube-and-conquer method has been used to resolve long-standing combinatorial problems such as the Boolean Pythagorean triples problem <|cite_start|> (Reference: Solving Very Hard Problems: Cube-and-Conquer, a Hybrid SAT Solving Method: A recent success of SAT solving has been the solution of the boolean Pythagorean Triples problem [Heule et al., 2016], delivering the largest proof yet, of 200 terabytes in size. We present this and the underlying paradigm Cube-and-Conquer, a powerful general method to solve big SAT problems, based on integrating the “old” and “new” methods of SAT solving.) <|cite_end|> and computing the fifth Schur number <|cite_start|> (Reference: Schur Number Five: We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number $n$ such that there exists a five-coloring of the positive numbers up to $n$ without a monochromatic solution of the equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.) <|cite_end|>. The idea behind the cube-and-conquer method is to split a SAT instance into subproblems defined by \emph{cubes} (propositional formulae of the form $l_1\land\dotsb\land l_n$ where $l_i$ are literals). Each cube defines a single subproblem---generated by assuming the cube is true---and each subproblem is then solved or ``conquered'' either in parallel or in sequence. \paragraph{Projective planes.} A projective plane is a collection of points and lines that satisfy certain axioms, for example, in a projective plane any two lines intersect at a unique point. Finite projective planes can be defined in terms of \emph{incidence matrices} that have a $1$ in the $(i,j)$th entry exactly when the $j$th point is on the $i$th line. In this framework, a projective plane of order~$n$ is a square $\{0,1\}$-matrix of order $n^2+n+1$ where any two rows or any two columns intersect exactly once (where two rows or columns \emph{intersect} when they share a $1$ in the same position). To avoid degenerate cases we also require that each row contains at least two zeros or equivalently that each row contains exactly $n+1$ ones. Two projective planes are said to be \emph{isomorphic} if one can be transformed into the other via a series of row or column permutations. Projective planes are known to exist in all orders that are primes or prime powers and the \emph{prime power conjecture} is that they exist in no other orders. Some orders such as $n=6$ have been ruled out on theoretical grounds making $n=10$ the first uncertain case. 
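The incidence-matrix characterization just given can be checked mechanically: a square {0,1}-matrix of order n^2+n+1 in which any two distinct rows, and any two distinct columns, share a 1 in exactly one position, with n+1 ones per row. A minimal sketch follows, illustrated on the Fano plane, the projective plane of order 2.

```python
# Check the incidence-matrix characterization of a projective plane of
# order n: off-diagonal entries of M M^T count row-pair intersections,
# and M^T M does the same for column pairs.
import numpy as np

def is_projective_plane(m: np.ndarray, n: int) -> bool:
    size = n * n + n + 1
    if m.shape != (size, size):
        return False
    for prod in (m @ m.T, m.T @ m):
        if not np.all(prod[~np.eye(size, dtype=bool)] == 1):
            return False
    return bool(np.all(m.sum(axis=1) == n + 1))

# The Fano plane: seven points, seven lines, three points per line.
fano = np.array([[1,1,0,1,0,0,0],
                 [0,1,1,0,1,0,0],
                 [0,0,1,1,0,1,0],
                 [0,0,0,1,1,0,1],
                 [1,0,0,0,1,1,0],
                 [0,1,0,0,0,1,1],
                 [1,0,1,0,0,0,1]])
assert is_projective_plane(fano, n=2)
```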
This stimulated a massive computer search for such a plane <|cite_start|> (Reference: The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it.) <|cite_end|> based on the form such a plane must have assuming certain codewords exist. A \emph{codeword} is a $\{0,1\}$-vector in the rowspace (mod 2) of a $\{0,1\}$-matrix and the \emph{weight} of a codeword is the number of $1$s that it contains. \begin{table} \centering \begin{tabular}{cccc} Case & Symmetries & Group Size & Initial Cols. \\ 1a & $S_4\wr S_2$ & 1152 & 28 \\ 1b & $S_4\times S_4$ & \0576 & 23 \\ 1c & $S_4\wr S_2$ & 1152 & 18 \\ 2 & $S_4\times S_2$ & \0\048 & 28 \\ 3 & $D_8$ & \0\016 & 28 \\ 4 & $D_4\times S_2$ & \0\016 & 28 \\ 5 & $S_3\times S_2$ & \0\012 & 28 \\ 6a & $S_2\times S_2$ & \0\0\04 & 28 \\ 6b & $S_2$ & \0\0\02 & 26 \\ 6c & $S_2\times S_2$ & \0\0\04 & 24 \end{tabular} \caption{The ten possible cases for the first eight rows of a projective plane of order ten generating a weight 16 codeword and the symmetries in the initial columns (see below). Here $S_n$ denotes the symmetric group of order~$n!$, $D_n$ denotes the dihedral group of order~$2n$, and $\wr$ denotes the wreath product.}\label{tbl:cases} \end{table} It is known <|cite_start|> (Reference: Configurations in a Plane of Order Ten: ) <|cite_end|> that a projective plane of order ten must generate codewords of weight 15, 16, or~19, thus dramatically shrinking the search space and naturally splitting the search into three cases. As shown by Carter, up to isomorphism there are ten possibilities for the first eight rows of the planes that generate weight 16 codewords. Five of these possibilities (cases~2 to~6a in Table~\ref{tbl:cases}) were eliminated by the searches of Carter and the other five were eliminated by the searches of <|cite_start|> (Reference: The nonexistence of code words of weight 16 in a projective plane of order 10: ) <|cite_end|>. \paragraph{Incidence matrix structure.} Carter derived numerous properties that the structure of a projective plane generating a weight 16 codeword must satisfy. In particular, the projective plane can be decomposed into a $3\times2$ grid of submatrices as follows: \[ \begin{matrix} & & 16 & 95 \\ \phantom{0}8\!\!\!\! & \multirow{3}{*}{\rlap{$\left(\rule{0pt}{16pt}\right.$}} & 2 & k & \multirow{3}{*}{\llap{$\left.\rule{0pt}{16pt}\right)$}} \\ 72\!\!\!\! & & 9 & 8-2k \\ 31\!\!\!\! & & 0 & k+3 \end{matrix} \] Here the numbers outside the matrix denote the number of rows or columns in that part of the submatrix. The numbers inside the matrix denote how many $1$s there are in each column in that part of the submatrix; certain columns depend on a parameter~$k$ that differs between columns. Additionally, Carter showed that every entry in the first 16 columns is uniquely specified by the starting case.
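The codeword definition above can likewise be made concrete: enumerate the mod-2 rowspace of a small incidence matrix and collect the weights that occur. This brute-force enumeration is feasible only for tiny matrices; the order-ten searches restrict to fixed weights precisely because the full rowspace is astronomically large.

```python
# Sketch: every codeword is a mod-2 sum of rows; its weight is its number
# of 1s. Exponential in the number of rows -- illustration only.
import numpy as np
from itertools import combinations

def codeword_weights(m: np.ndarray) -> set[int]:
    rows = list(m)
    weights = set()
    for r in range(1, len(rows) + 1):
        for combo in combinations(rows, r):
            word = np.bitwise_xor.reduce(np.stack(combo), axis=0)
            weights.add(int(word.sum()))
    return weights

# For the Fano plane matrix from the previous sketch this yields {0, 3, 4, 7},
# the weight distribution of the [7,4] Hamming code.
```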
We call the columns incident with at least two of the first eight rows the \emph{initial} columns and the columns incident with at least one of the first eight rows the \emph{inside} columns. Full starting matrices for each case are available at \href{https://uwaterloo.ca/mathcheck/w16}{uwaterloo.ca/mathcheck/w16}. \paragraph{Symmetry groups.} A projective plane (or partial projective plane) may be symmetric in nontrivial ways; in other words, there may exist row or column permutations that fix the entries of the plane. Such symmetries are important to detect because they can dramatically reduce the search space---and therefore the running time---of any search that makes use of them <|cite_start|> (Reference: Efficient Symmetry Breaking for Boolean Satisfiability: Identifying and breaking the symmetries of conjunctive normal form (CNF) formulae has been shown to lead to significant reductions in search times. Symmetries in the search space are broken by adding appropriate symmetry-breaking predicates (SBPs) to an SAT instance in CNF. The SBPs prune the search space by acting as a filter that confines the search to nonsymmetric regions of the space without affecting the satisfiability of the CNF formula. For symmetry breaking to be effective in practice, the computational overhead of generating and manipulating SBPs must be significantly less than the runtime savings they yield due to search space pruning. In this paper, we describe a more systematic and efficient construction of SBPs. In particular, we use the cycle structure of symmetry generators, which typically involve very few variables, to drastically reduce the size of SBPs. Furthermore, our new SBP construction grows linearly with the number of relevant variables as opposed to the previous quadratic constructions. Our empirical data suggest that these improvements reduce search runtimes by one to two orders of magnitude on a wide variety of benchmarks with symmetries.) <|cite_end|> <|cite_start|> (Reference: Expressing Symmetry Breaking in DRAT Proofs: ) <|cite_end|>. \begin{figure} \input 1c.tikz \caption{The upper-left $8\times18$ submatrix from case 1c.}\label{fig:tetrahedrons} \end{figure} For example, Figure~\ref{fig:tetrahedrons} shows the \emph{initial configuration} (the first eight rows and initial columns) from case 1c. This matrix is fixed by the permutation that swaps the first two rows and column~$k$ with column~$k+4$ for $1\leq k\leq 4$. The set of all row and column permutations that fix the entries of a matrix forms a group known as the \emph{symmetry group} of the matrix. In the matrix of Figure~\ref{fig:tetrahedrons} any permutation of the first four rows, any permutation of the last four rows, and the permutation that swaps row~$i$ and row~$i+4$ for $1\leq i\leq4$ occur (with appropriate column permutations) in the symmetry group. The size of this permutation group is $4!^2\cdot2=1152$ and the group is isomorphic to the group of symmetries of a pair of tetrahedrons. Up to isomorphism, the symmetry groups for each of the ten possible initial configurations are given in Table~\ref{tbl:cases}. <|paper_end|>
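A symmetry, as defined above, is a pair of row and column permutations that fixes the matrix entries, and checking a candidate pair is straightforward. A minimal sketch with a toy example follows; finding all such pairs is what symmetry-detection tooling automates.

```python
# Sketch: verify that a (row permutation, column permutation) pair fixes a
# {0,1}-matrix, i.e. belongs to its symmetry group.
import numpy as np

def is_symmetry(m: np.ndarray, row_perm: list[int], col_perm: list[int]) -> bool:
    """True iff permuting rows by row_perm and columns by col_perm fixes m."""
    return bool(np.array_equal(m[np.ix_(row_perm, col_perm)], m))

# Toy example: a matrix with two equal rows is fixed by swapping them.
toy = np.array([[1, 0, 1],
                [1, 0, 1],
                [0, 1, 0]])
assert is_symmetry(toy, [1, 0, 2], [0, 1, 2])
```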
[ "<|reference_start|> The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it. <|reference_end|>", "<|reference_start|> DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs: <|reference_end|>", "<|reference_start|> Solving Very Hard Problems: Cube-and-Conquer, a Hybrid SAT Solving Method: A recent success of SAT solving has been the solution of the boolean Pythagorean Triples problem [Heule et al., 2016], delivering the largest proof yet, of 200 terabytes in size. We present this and the underlying paradigm Cube-and-Conquer, a powerful general method to solve big SAT problems, based on integrating the “old” and “new” methods of SAT solving. <|reference_end|>", "<|reference_start|> The nonexistence of code words of weight 16 in a projective plane of order 10: <|reference_end|>" ]
[ 0, 2, 6, 10 ]
{"<|cite_2|>": "ss-709493", "<|cite_5|>": "ss-758514", "<|cite_6|>": "ss-960736", "<|multi_cite_7_1|>": "ss-1587052", "<|multi_cite_7_2|>": "ss-1051487", "<|cite_8|>": "ss-758514", "<|cite_9|>": "ss-1051484", "<|cite_10|>": "arxiv-140877", "<|cite_11|>": "ss-709493", "<|multi_cite_12_2|>": "ss-2384318", "<|cite_15|>": "ss-2384319", "<|multi_cite_16_1|>": "ss-1841775", "<|multi_cite_16_2|>": "ss-1360006"}
2406.15977
<|paper_start|> Title: A Bayesian framework for spectral reprojection Abstract: A Bayesian framework for spectral reprojection: Fourier partial sum approximations yield exponential accuracy for smooth and periodic functions, but produce the infamous Gibbs phenomenon for non-periodic ones. Spectral reprojection resolves the Gibbs phenomenon by projecting the Fourier partial sum onto a Gibbs complementary basis, often prescribed as the Gegenbauer polynomials. Noise in the Fourier data and the Runge phenomenon both degrade the quality of the Gegenbauer reconstruction solution, however. Motivated by its theoretical convergence properties, this paper proposes a new Bayesian framework for spectral reprojection, which allows a greater understanding of the impact of noise on the reprojection method from a statistical point of view. We are also able to improve the robustness with respect to the Gegenbauer polynomials parameters. Finally, the framework provides a mechanism to quantify the uncertainty of the solution estimate. Introduction \label{sec:introduction} Fourier samples model data acquisitions in applications such as magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). Indeed, recovering images from either MRI $k$-space data or SAR phase history data most often involves recasting the problem as a linear inverse problem with the forward operator given by the discrete (non-uniform) Fourier transform (DFT) matrix (see e.g. <|cite_start|> (Reference: Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: ) <|cite_end|> <|cite_start|> (Reference: {SAR: The biggest problem with SAR images is how to reduce speckle noise near boundaries. In this paper, we aim to develop an effective filter that can preserve boundaries using the proposed method. A wavelet-based sigma filter is applied to extract an image without blurring in edge regions while reducing speckle noise. Experimental results produce an output image with reduced blurring of edge information. Compared with the median filter, the proposed method yields superior images in which speckle noise is effectively removed. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】) <|cite_end|> <|cite_start|> (Reference: Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach: ) <|cite_end|> <|cite_start|> (Reference: Sampling density compensation in MRI: Rationale and an iterative numerical solution: Data collection of MRI which is sampled nonuniformly in k‐space is often interpolated onto a Cartesian grid for fast reconstruction. The collected data must be properly weighted before interpolation, for accurate reconstruction. We propose a criterion for choosing the weighting function necessary to compensate for nonuniform sampling density. A numerical iterative method to find a weighting function that meets that criterion is also given. This method uses only the coordinates of the sampled data; unlike previous methods, it does not require knowledge of the trajectories and can easily handle trajectories that “cross” in k‐space.
Moreover, the method can handle sampling patterns that are undersampled in some regions of k‐space and does not require a post‐gridding density correction. Weighting functions for various data collection strategies are shown. Synthesized and collected in vivo data also illustrate aspects of this method. Magn Reson Med 41:179–186, 1999. © 1999 Wiley‐Liss, Inc.) <|cite_end|>). Compressive sensing (CS) algorithms that promote sparse solutions in a known sparse domain <|cite_start|> (Reference: Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform: ) <|cite_end|> <|cite_start|> (Reference: {SAR: The biggest problem with SAR images is how to reduce speckle noise near boundaries. In this paper, we aim to develop an effective filter that can preserve boundaries using the proposed method. A wavelet-based sigma filter is applied to extract an image without blurring in edge regions while reducing speckle noise. Experimental results produce an output image with reduced blurring of edge information. Compared with the median filter, the proposed method yields superior images in which speckle noise is effectively removed. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】) <|cite_end|> <|cite_start|> (Reference: Sparse MRI: the application of compressed sensing for rapid MR imaging: The sparsity which is implicit in MR images is exploited to significantly undersample k‐space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain–for example, in terms of spatial finite‐differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed‐sensing, images with a sparse representation can be recovered from randomly undersampled k‐space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise‐like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo‐random variable‐density undersampling of phase‐encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin‐echo brain imaging and 3D contrast enhanced angiography. Magn Reson Med, 2007. © 2007 Wiley‐Liss, Inc.) <|cite_end|> <|cite_start|> (Reference: Wide-area wide-angle SAR focusing: This study started with a data set that leveraged the latest autofocusing methods to obtain the cleanest radar data set appropriate for generating large SAR imagery over a 5-km spot.
The authors intended to spotlight individual smaller nonmoving targets within the larger area; however, the images appeared blurred and varied greatly when generated by different passes of the circular SAR radar system. This study concentrated on using widely dispersed QTs combined with an algorithm to correct for both range and phase errors to improve imaging. The wide-angle QT imaging and vehicle identification experiments showed a significant improvement over all orbits and provided higher quality imagery to more robustly perform image registration. Focusing showed significant improvement in visualizations quad-trihedrals and a vehicle.) <|cite_end|> have become increasingly widespread in providing point estimate image recoveries. More recently, Bayesian inference methods have been developed to also quantify the uncertainty of the estimate. This investigation develops a new Bayesian framework for recovering smooth but non-periodic functions from given noisy Fourier data. Uncertainty quantification can also be achieved when the hyperparameters in the posterior density function are fixed. Importantly, however, rather than use the often employed {\em sparse prior} (or sparse penalty term in CS), here we construct a prior based on {\em spectral reprojection}. The spectral reprojection method is a {\em forward} approach designed to {\em reproject} the observable Fourier data onto a Gibbs complementary basis. It is sometimes referred to as Gegenbauer reconstruction when the Gibbs complementary basis is comprised of Gegenbauer polynomials. The reprojection eliminates the Gibbs phenomenon and restores the exponential convergence (hence the use of {\em spectral} in its name) in the maximum norm. For self-containment, we summarize spectral reprojection in \cref{sec:spectralreprojection}. Although Gegenbauer reconstruction has been successfully used in applications where the observable Fourier data have complex additive Gaussian noise of mean zero <|cite_start|> (Reference: Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: ) <|cite_end|> <|cite_start|> (Reference: On Reconstruction from Non-uniform Spectral Data: ) <|cite_end|>, it was also demonstrated in <|cite_start|> (Reference: Reducing the Effects of Noise in Image Reconstruction: ) <|cite_end|> that while the estimator is unbiased, its variance is spatially dependent. Nevertheless, theoretical results in the seminal work provide key insights that inspire us to develop a new Bayesian inference method for the corresponding {\em inverse} problem. Namely, the derivation of the error terms in the exponential convergence proof naturally motivates the choices of the likelihood and prior terms in the Bayesian method. In particular, the prior used for the construction of the posterior should be designed to favor solutions whose orthogonal polynomial partial sum expansion yields good approximations. Such an assumption is consistent for recovering (discretized) functions that are smooth but not periodic, and is arguably more appropriate than using a sparsifying operator such as first order differencing, which by design assumes that the underlying function is piecewise constant, or Tikhonov regularization, which is even more restrictive. Moreover, when coupled with the likelihood term, the common kernel condition <|cite_start|> (Reference: {Statistical and Computational Inverse Problems: Classification Without Interaction”), and 13 (“Two-Way Crossed Classification With Interaction”). 
Every chapter contains two or more numerical example with the exception of Chapters 14 (“Three-Way and Higher-Order Crossed Classifications”) and 17 (“General r-Way Nested Classification”), which only contain one example each. Examples appear in the estimation, confidence interval, and hypothesis testing sections. Distribution of estimators is only discussed for the models in Chapters 11 and 15 (“Two-Way Nested Classification”). Chapters 11, 13, 15, and 16 (“Three-Way Nested Classification”) contain information on design considerations involving unbalanced experiments. The appendixes contain basic theoretical and methodological results useful in the development of unbalanced random models as well as information on the capabilities of widely available software. Packages discussed are SAS, SPSS, BMDP, S–PLUS, GENSTAT, and BUGS. The book is well organized and focused. It contains extensive coverage on crossed and nested unbalanced models. Because of the number of topics, the depth of coverage is occasionally limited. This is only a minor issue, since there are always a substantial number of references given. The organization of the book and the presentation of the material make difficult subject matter easier to follow. The main drawback to the book is that it deals only with completely random univariate models. Given the volume of information in the book, however, this is understandable. The authors point out this shortcoming in the Preface and suggest that a future work covering these topics may be forthcoming. For the application-oriented practitioner, a small disadvantage is that a number of the estimation approaches discussed, while interesting, cannot be found in the more commonly used statistical software packages. Regardless, the book makes an excellent resource for anyone working with unbalanced random models.) <|cite_end|> is automatically satisfied so that a unique minimum for the corresponding \emph{maximum a posteriori} (MAP) estimate may be obtained. We call this approach the {\em Bayesian spectral reprojection} (BSR) method, and its point estimate solution, which is consistent with Gegenbauer reconstruction, is the MAP estimate of the corresponding posterior density and is determined through optimization. As already noted, we are also able to provide uncertainty quantification for fixed hyperparameters. We further propose a {\em generalized} Bayesian spectral reprojection (GBSR) method, which modifies the BSR by formulating the likelihood so that the observables are not first transformed to the Gibbs complementary basis. Rather, we directly use the observable Fourier data for this purpose. Removing this restriction allows us to explore a larger space that still assumes that the function is well-approximated by the Gegenbauer partial sum expansion, but also seeks a good fit to the observable data. As our numerical results demonstrate, the point estimate obtained for GBSR is more robust to different parameter choices in low SNR environments. It is also possible to quantify the uncertainty when the posterior density function uses fixed hyperparameters. \subsection*{Paper organization} The rest of this paper is organized as follows. \Cref{sec:preliminaries} describes the underlying problem and summarizes the spectral reprojection method. The problem is then discretized in \Cref{sec:MVformulation} to enable the Bayesian approach used in \Cref{sec:Bayesspectral}.
There we describe how the theoretical results proving exponential convergence for spectral reprojection inspire the construction of a new prior, leading to the Bayesian spectral reprojection (BSR) method. These ideas are then modified in \Cref{sec:Bayes} for the {\em generalized} Bayesian spectral reprojection (GBSR) method. Numerical examples in \Cref{sec:numerical} show the efficacy of our new methods for recovering 1D smooth functions from noisy Fourier data. \Cref{sec:summary} provides some concluding remarks. <|paper_end|>
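To make the reprojection step above concrete, here is a minimal numerical sketch in Python. The test function, the number of Fourier modes N, the Gegenbauer degree m, and the parameter lambda are all illustrative choices rather than the paper's settings, and the normalizations are computed by quadrature instead of in closed form.

```python
import numpy as np
from scipy.special import eval_gegenbauer

N, m, lam = 32, 8, 4.0            # illustrative parameters, not the paper's
x = np.linspace(-1.0, 1.0, 4001)

# Exact Fourier coefficients of f(x) = exp(x) on [-1, 1]:
# fhat_k = (-1)^k (e - 1/e) / (2 (1 - i pi k)).
k = np.arange(-N, N + 1)
fhat = (-1.0) ** k * (np.e - 1.0 / np.e) / (2.0 * (1.0 - 1j * np.pi * k))

# Fourier partial sum: oscillates (Gibbs phenomenon) since exp(x) is not periodic.
fN = np.real(np.exp(1j * np.pi * np.outer(x, k)) @ fhat)

# Reproject f_N onto Gegenbauer polynomials C_l^lam with weight (1 - x^2)^(lam - 1/2).
w = (1.0 - x ** 2) ** (lam - 0.5)
g = np.zeros_like(x)
for l in range(m + 1):
    C = eval_gegenbauer(l, lam, x)
    g += np.trapz(w * fN * C, x) / np.trapz(w * C * C, x) * C

print(np.abs(fN - np.exp(x)).max())  # O(1) error from the Gibbs overshoot near the ends
print(np.abs(g - np.exp(x)).max())   # several orders of magnitude smaller
```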
[ "<|reference_start|> {SAR: SAR영상의 가장 큰 문제점은 경계선 부근에서 스패클(Speckle)잡음을 어떻게 줄이느냐 하는 것이다. 본 논문에서는 제안한 방법을 이용하여 경계선을 보존할 수 있는 효과적인 필터를 개발하고자 한다. 스패클 잡음을 줄이면서 에지 영역에 대한 블러링 없는 영상을 추출하기 위하여 웨이브렛 기반의 sigma 필터를 적용하였다. 실험 결과 에지정보에 대한 블러링을 줄인 출력 영상을 구성하였다. 제안한 방법을 미디언 필터와 비교한 결과, 스패클 잡음을 효과적으로 제거한 우수한 영상을 얻을 수 있었다. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】 <|reference_end|>", "<|reference_start|> Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform: <|reference_end|>", "<|reference_start|> Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: <|reference_end|>", "<|reference_start|> {Statistical and Computational Inverse Problems: Classification Without Interaction”), and 13 (“Two-Way Crossed Classification With Interaction”). Every chapter contains two or more numerical example with the exception of Chapters 14 (“Three-Way and Higher-Order Crossed Classifications”) and 17 (“General r-Way Nested Classification”), which only contain one example each. Examples appear in the estimation, confidence interval, and hypothesis testing sections. Distribution of estimators is only discussed for the models in Chapters 11 and 15 (“Two-Way Nested Classification”). Chapters 11, 13, 15, and 16 (“Three-Way Nested Classification”) contain information on design considerations involving unbalanced experiments. The appendixes contain basic theoretical and methodological results useful in the development of unbalanced random models as well as information on the capabilities of widely available software. Packages discussed are SAS, SPSS, BMDP, S–PLUS, GENSTAT, and BUGS. The book is well organized and focused. It contains extensive coverage on crossed and nested unbalanced models. Because of the number of topics, the depth of coverage is occasionally limited. This is only a minor issue, since there are always a substantial number of references given. The organization of the book and the presentation of the material make difficult subject matter easier to follow. The main drawback to the book is that it deals only with completely random univariate models. Given the volume of information in the book, however, this is understandable. The authors point out this shortcoming in the Preface and suggest that a future work covering these topics may be forthcoming. For the application-oriented practitioner, a small disadvantage is that a number of the estimation approaches discussed, while interesting, cannot be found in the more commonly used statistical software packages. Regardless, the book makes an excellent resource for anyone working with unbalanced random models. <|reference_end|>" ]
[ 1, 4, 8, 11 ]
{"<|multi_cite_1_1|>": "ss-1369515", "<|multi_cite_1_3|>": "ss-1263914", "<|multi_cite_1_4|>": "ss-802764", "<|multi_cite_1_5|>": "ss-1369516", "<|multi_cite_2_1|>": "ss-714880", "<|multi_cite_2_2|>": "ss-1263914", "<|multi_cite_2_3|>": "ss-840165", "<|multi_cite_2_4|>": "ss-1369517", "<|multi_cite_4_1|>": "ss-1369515", "<|multi_cite_4_2|>": "ss-867076", "<|cite_5|>": "ss-1369518", "<|cite_7|>": "ss-1099780"}
1705.07051
<|paper_start|> Title: Speeding up Memory-based Collaborative Filtering with Landmarks Abstract: Speeding up Memory-based Collaborative Filtering with Landmarks: Recommender systems play an important role in many scenarios where users are overwhelmed with too many choices to make. In this context, Collaborative Filtering (CF) arises by providing a simple and widely used approach for personalized recommendation. Memory-based CF algorithms mostly rely on similarities between pairs of users or items, which are subsequently employed in classifiers like k-Nearest Neighbor (kNN) to generalize for unknown ratings. A major issue regarding this approach is building the similarity matrix. Depending on the dimensionality of the rating matrix, the similarity computations may become computationally intractable. To overcome this issue, we propose to represent users by their distances to preselected users, namely landmarks. This procedure drastically reduces the computational cost associated with the similarity matrix. We evaluated our proposal on two distinct databases, and the results showed our method has consistently and considerably outperformed eight CF algorithms (including both memory-based and model-based) in terms of computational performance. Introduction The continuously improving network technology and the exponential growth of social networks have been connecting the whole world, making available a huge volume of content, media, goods, services, and many other kinds of items on the Internet <|cite_start|> (Reference: Active Learning Applied to Rating Elicitation for Incentive Purposes: ) <|cite_end|>. However, this phenomenon leads to the paradox of choice: people overwhelmed with too many choices tend to become more anxious and may eventually give up on completing the order. To tackle this issue, a massive effort has been made towards the development of data mining methods for recommender systems <|cite_start|> (Reference: Introduction to Recommender Systems Handbook: ) <|cite_end|>. This promising technology aims at helping users search and find items that are likely to be consumed, alleviating the burden of choice. In this context, many recommender systems have been designed to provide users with suggested items in a personalized manner. A well-known and widely used approach for this kind of recommendation is Collaborative Filtering (CF) <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. It consists in considering the history of purchases and users' tastes to identify items that are likely to be acquired.
In general, this data is represented by a rating matrix, where each row corresponds to a user, each column is assigned to an item, and each cell holds a rating given by the corresponding user and item. Thus, CF algorithms aim at predicting the missing ratings of the matrix, which are subsequently used for personalized item recommendations. CF algorithms may be divided into two main classes: \textit{memory-based} and \textit{model-based} algorithms. The former class uses k-Nearest Neighbors (kNN) methods for rating predictions, and therefore relies on computing similarities between pairs of users or items according to their ratings <|cite_start|> (Reference: Advances in Collaborative Filtering: ) <|cite_end|>. The latter class employs matrix factorization techniques so as to obtain an approximation of the rating matrix, in which the unknown cells are filled with rating predictions <|cite_start|> (Reference: MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.) <|cite_end|>. Both memory-based and model-based algorithms have advantages and disadvantages. In this work, we are interested in memory-based algorithms. This class of CF algorithms remains widely used in many real systems due to its simplicity. It provides an elegant way for integrating information of users and items beyond the ratings for refining similarities. In addition, memory-based CF algorithms allow \textit{online} recommendations, something required in many practical applications as data is arriving constantly, new users are signing up, and new products are being offered <|cite_start|> (Reference: Google news personalization: scalable online collaborative filtering: Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several million users and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.) <|cite_end|>. So, incorporating such information in an \textit{online} fashion is highly desirable, making up-to-date predictions on the fly and avoiding re-optimization from scratch with each new piece of data. The major issue regarding memory-based CF algorithms lies in their computational scalability associated with the growth of the rating matrix. As users are often represented by vectors of items (\textit{i.e.} rows of the rating matrix), it turns out that the larger the number of items is, the higher the computational cost to compute similarities between users. Consequently, memory-based CF may become computationally intractable for a large number of users or items.
In this paper, we propose an alternative to improve the computational scalability of memory-based CF algorithms. Our proposal consists in representing users by their distances to preselected users, namely landmarks. Thus, instead of computing similarities between users represented by large vectors (often sparse) of ratings, our method calculates similarities through vectors of distances to fixed landmarks, obtaining an approximate similarity matrix for subsequent rating predictions. As the number of landmarks required for a good approximation is usually much smaller than the number of items, the proposed method drastically alleviates the cost associated with the similarity matrix computation. The results show that our proposal consistently and considerably outperforms the evaluated CF algorithms (including both memory-based and model-based) in terms of computational performance. Interestingly, it achieves accuracy results better than the original memory-based CF algorithms with few landmarks. The main contributions of this work are the following: \begin{itemize} \item A rating matrix reduction method to speed up memory-based CF algorithms. \item The proposal and investigation of 5 landmark selection strategies. \item An extensive comparison between our proposal and 8 CF algorithms, including both memory-based and model-based classes. \end{itemize} The work is organized into five sections, of which this is the first. Section 2 reviews the literature and presents the related work. Section 3 describes the recommendation problem definitions. It also introduces our proposal and presents the landmark selection strategies. Section 4 starts with the description of the databases and metrics employed in the experiments, continues by detailing the parameter tuning of the proposed method, and finishes by comparing our proposal against other CF algorithms. Finally, Section 5 points out conclusions and future work. Related Work The Collaborative Filtering (CF) approach consists in predicting whether a specific user would prefer an item over others based on ratings given by users <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. For this purpose, CF uses only a rating matrix $R$, where rows correspond to users, columns correspond to items, and each cell holds the rating value $r_{uv}$ given by user $u$ to item $v$. Thus, the recommendation problem lies in predicting the missing ratings of $R$, which is often very sparse.
Interestingly, although there are many algorithms in Supervised Learning (SL) for data classification and regression, they are not directly suitable for CF, since ratings are not represented in a shared vector space $\mathbb{R}^{d}$. This happens because most users do not consume the same items, which prevents their representation in the same vector space $\mathbb{R}^{d}$. Consequently, the CF problem is slightly different from SL. To overcome this issue, Braida et al. propose to build a vector space of latent factors to represent all item ratings given by users, and then apply SL techniques to predict unknown ratings. The authors use Singular Value Decomposition (SVD) to obtain user and item latent factors, and then build a vector space which contains all item ratings given by users. Their scheme consistently outperforms many state-of-the-art algorithms <|cite_start|> (Reference: Transforming collaborative filtering into supervised learning: ) <|cite_end|>. Sarwar et al. also apply SVD on the rating matrix to reduce its dimensionality and transform it into a new feature vector space. Thus, predictions are generated by operations between the latent factor matrices of users and items <|cite_start|> (Reference: Application of dimensionality reduction in recommender system--a case study: Abstract : We investigate the use of dimensionality reduction to improve performance for a new class of data analysis software called "recommender systems" Recommender systems apply knowledge discovery techniques to the problem of making product recommendations during a live customer interaction. These systems are achieving widespread success in E-commerce nowadays, especially with the advent of the Internet. The tremendous growth of customers and products poses three key challenges for recommender systems in the E-commerce domain. These are: producing high quality recommendations, performing many recommendations per second for millions of customers and products, and achieving high coverage in the face of data sparsity. One successful recommender system technology is collaborative filtering, which works by matching customer preferences to other customers in making recommendations. Collaborative filtering has been shown to produce high quality recommendations, but the performance degrades with the number of customers and products. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very largescale problems. This paper presents two different experiments where we have explored one technology called Singular Value Decomposition (SVD) to reduce the dimensionality of recommender system databases. Each experiment compares the quality of a recommender system using SVD with the quality of a recommender system using collaborative filtering. The first experiment compares the effectiveness of the two recommender systems at predicting consumer preferences based on a database of explicit ratings of products. The second experiment compares the effectiveness of the two recommender systems at producing Top-N lists based on a real-life customer purchase database from an E-Commerce site. Our experience suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.) <|cite_end|>.
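To illustrate the SVD-based approach described above, a self-contained toy sketch in Python follows. The rating matrix, the mean-centering scheme, and the choice of k=2 latent factors are all hypothetical illustration choices, not the cited papers' exact procedures; production systems typically factor only the observed entries with dedicated solvers.

```python
import numpy as np

# Hypothetical toy rating matrix; zeros mark missing ratings.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

mask = R > 0
user_mean = (R.sum(1) / mask.sum(1))[:, None]
Rc = np.where(mask, R - user_mean, 0.0)          # center observed ratings

U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
k = 2                                            # number of latent factors
R_hat = user_mean + (U[:, :k] * s[:k]) @ Vt[:k]  # low-rank reconstruction

print(np.round(R_hat, 2))                        # predictions fill the zero cells
```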
Generally, dimensionality reduction techniques based on Matrix Factorization (MF) for CF are more efficient than other techniques, for instance Regularized SVD <|cite_start|> (Reference: Improving regularized singular value decomposition for collaborative filtering: A key part of a recommender system is a collaborative filtering algorithm predicting users’ preferences for items. In this paper we describe different efficient collaborative filtering techniques and a framework for combining them to obtain a good prediction. The methods described in this paper are the most important parts of a solution predicting users’ preferences for movies with error rate 7.04% better on the Netflix Prize dataset than the reference algorithm Netflix Cinematch. The set of predictors used includes algorithms suggested by Netflix Prize contestants: regularized singular value decomposition of data with missing values, K-means, postprocessing SVD with KNN. We propose extending the set of predictors with the following methods: addition of biases to the regularized SVD, postprocessing SVD with kernel ridge regression, using a separate linear model for each movie, and using methods similar to the regularized SVD, but with fewer parameters. All predictors and selected 2-way interactions between them are combined using linear regression on a holdout set.) <|cite_end|>, Improved Regularized SVD <|cite_start|> (Reference: Improving regularized singular value decomposition for collaborative filtering: A key part of a recommender system is a collaborative filtering algorithm predicting users’ preferences for items. In this paper we describe different efficient collaborative filtering techniques and a framework for combining them to obtain a good prediction. The methods described in this paper are the most important parts of a solution predicting users’ preferences for movies with error rate 7.04% better on the Netflix Prize dataset than the reference algorithm Netflix Cinematch. The set of predictors used includes algorithms suggested by Netflix Prize contestants: regularized singular value decomposition of data with missing values, K-means, postprocessing SVD with KNN. We propose extending the set of predictors with the following methods: addition of biases to the regularized SVD, postprocessing SVD with kernel ridge regression, using a separate linear model for each movie, and using methods similar to the regularized SVD, but with fewer parameters. All predictors and selected 2-way interactions between them are combined using linear regression on a holdout set.) <|cite_end|>, Probabilistic MF <|cite_start|> (Reference: Probabilistic {Matrix} {Factorization}: Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. 
When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.) <|cite_end|> and Bayesian Probabilistic MF <|cite_start|> (Reference: Bayesian probabilistic matrix factorization using {Markov chain Monte Carlo: Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.) <|cite_end|>. They have received great attention after the Netflix Prize and are known as model-based CF algorithms <|cite_start|> (Reference: Empirical Analysis of Predictive Algorithms for Collaborative Filtering: Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time.) <|cite_end|>.
In contrast, memory-based CF algorithms are an adapted k-Nearest Neighbors (kNN) method, in which similarity is computed considering only co-rated items between users, \textit{i.e.} the similarity between users is computed only for the vectors of co-rated items <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. Although model-based CF algorithms usually provide higher accuracy than the memory-based ones, the latter have been widely used <|cite_start|> (Reference: Recommender Systems for Product Bundling: ) <|cite_end|> <|cite_start|> (Reference: User-Specific Feature-Based Similarity Models for Top-n Recommendation of New Items: Recommending new items for suitable users is an important yet challenging problem due to the lack of preference history for the new items. Noncollaborative user modeling techniques that rely on the item features can be used to recommend new items. However, they only use the past preferences of each user to provide recommendations for that user. They do not utilize information from the past preferences of other users, which can potentially be ignoring useful information. More recent factor models transfer knowledge across users using their preference information in order to provide more accurate recommendations. These methods learn a low-rank approximation for the preference matrix, which can lead to loss of information. Moreover, they might not be able to learn useful patterns given very sparse datasets. In this work, we present UFSM, a method for top-n recommendation of new items given binary user preferences. UFSM learns User-specific Feature-based item-Similarity Models, and its strength lies in combining two points: (1) exploiting preference information across all users to learn multiple global item similarity functions and (2) learning user-specific weights that determine the contribution of each global similarity function in generating recommendations for each user. UFSM can be considered as a sparse high-dimensional factor model where the previous preferences of each user are incorporated within his or her latent representation. This way, UFSM combines the merits of item similarity models that capture local relations among items and factor models that learn global preference patterns. A comprehensive set of experiments was conduced to compare UFSM against state-of-the-art collaborative factor models and noncollaborative user modeling techniques. Results show that UFSM outperforms other techniques in terms of recommendation quality. UFSM manages to yield better recommendations even with very sparse datasets.
Results also show that UFSM can efficiently handle high-dimensional as well as low-dimensional item feature spaces.) <|cite_end|> <|cite_start|> (Reference: Ranking-order case-based reasoning for financial distress prediction: ) <|cite_end|> <|cite_start|> (Reference: CenKNN: a scalable and effective text classifier: ) <|cite_end|> <|cite_start|> (Reference: Promoting the performance of vertical recommendation systems by applying new classification techniques: ) <|cite_end|>. This is due to their simplicity in providing an elegant way for integrating information of users and items beyond the ratings for refining similarities. Additionally, memory-based algorithms allow \textit{online} recommendations, making up-to-date predictions on the fly, which avoids re-optimizing from scratch with each new piece of data <|cite_start|> (Reference: Google news personalization: scalable online collaborative filtering: Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several million users and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.) <|cite_end|>. For these reasons, many authors seek to improve memory-based CF accuracy and performance, for example in <|cite_start|> (Reference: A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm: ) <|cite_end|> <|cite_start|> (Reference: A novel two-level nearest neighbor classification algorithm using an adaptive distance metric: ) <|cite_end|> <|cite_start|> (Reference: Boosting the K-Nearest-Neighborhood based incremental collaborative filtering: ) <|cite_end|>. A well-known problem present in memory-based CF algorithms lies in applying distance functions to users for calculating their similarities, which is computationally expensive. Often, the algorithm runtime increases with the number of users/items, becoming prohibitive for very large databases. Furthermore, finding a sub-matrix of $R$ which contains all users and also is not empty might be impossible due to data sparsity, \textit{i.e.} it is difficult to find an item vector subspace in which all users are represented. To tackle these issues, we propose a method to reduce the size of the rating matrix via landmarks. It consists in selecting $n$ users as landmarks, and then representing all users by their similarities to these landmarks. Thus, instead of representing users in the item vector space, we propose to locate users in a landmark vector space whose dimensionality is much smaller.
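A minimal sketch of this landmark idea, with hypothetical synthetic data: users are re-represented by their cosine similarities to a few landmark users, the user-user similarity matrix is then computed in that much smaller space, and a kNN-style prediction uses it. Uniform random selection is just one simple choice here; the paper investigates several selection strategies.

```python
import numpy as np

def cos_sim(A, B):
    """Cosine similarity between rows of A and rows of B."""
    An = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    Bn = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
    return An @ Bn.T

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(1000, 5000)).astype(float)   # toy rating matrix
R[rng.random(R.shape) < 0.95] = 0.0                       # ~95% sparsity

n_land = 50
landmarks = R[rng.choice(R.shape[0], n_land, replace=False)]

# Users re-represented by similarity to the landmarks (1000 x 50), so the
# user-user similarity matrix costs O(u^2 * n_land) instead of O(u^2 * items).
F = cos_sim(R, landmarks)
S = cos_sim(F, F)                                          # approximate similarities

def predict(u, v, k=20):
    """kNN prediction of user u's rating for item v from similar raters."""
    raters = np.where(R[:, v] > 0)[0]
    raters = raters[raters != u]
    top = raters[np.argsort(S[u, raters])[::-1][:k]]
    return R[top, v].mean() if len(top) else 0.0

print(predict(0, 10))
```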
The landmark technique is useful for improving algorithm runtime and was proposed by de Silva and Tenenbaum in the Multidimensional Scaling (MDS) context <|cite_start|> (Reference: Global versus Local Methods in Nonlinear Dimensionality Reduction: Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.) <|cite_end|>. In this case, the authors propose the Landmark MDS (LMDS) algorithm, which uses landmarks to reduce the computational costs of traditional MDS. LMDS builds a landmark set by selecting a few observations from the data -- the landmark set represents all observations. Then, it computes the similarity matrix for this set to obtain a suitable landmark representation in a d-dimensional vector space. Finally, the other observations are mapped to this new space, considering their similarities to the landmarks. The main advantage of using LMDS instead of other techniques is the ability to adjust accuracy and runtime. If one needs to decrease runtime, it is possible to sacrifice accuracy by reducing the size of the landmark set. Otherwise, if one needs to improve the algorithm's accuracy, it is also possible to increase the number of landmarks up to the database limit. Therefore, a good LMDS characteristic is to manage this trade-off between runtime and accuracy <|cite_start|> (Reference: Fast embedding of sparse music similarity graphs: This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. MDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijsktra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijsktra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nystrom approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality.) <|cite_end|>. Lee and Choi <|cite_start|> (Reference: Landmark MDS ensemble: ) <|cite_end|> argue that noise in the database harms LMDS accuracy, and then propose an adaptation of this algorithm, namely Landmark MDS Ensemble (LMDS Ensemble). They propose applying LMDS to different data partitions, and then combining the individual solutions in the same coordinate system. Their algorithm is less noise-sensitive but maintains the computational performance of LMDS. Another pitfall of the landmark approach is choosing the most representative observations as landmarks, since the data representation depends on the similarity to these points.
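For concreteness, the LMDS construction described above can be sketched in a few lines: embed the landmarks with classical MDS (double centering plus an eigendecomposition), then triangulate every remaining point from its squared distances to the landmarks. The data, landmark count, and target dimension below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # hypothetical data in R^10
land = X[rng.choice(500, 20, replace=False)]   # 20 randomly chosen landmarks

# Classical MDS on the landmarks: double centering + eigendecomposition.
D2 = ((land[:, None] - land[None]) ** 2).sum(-1)    # squared landmark distances
n = len(land)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]                       # top-2 components
Lk = V[:, idx] * np.sqrt(w[idx])                    # landmark embedding
pinv = V[:, idx].T / np.sqrt(w[idx])[:, None]       # pseudo-inverse transform

# Triangulate all points from their squared distances to the landmarks.
d2 = ((X[:, None] - land[None]) ** 2).sum(-1)       # (500, 20)
Y = -0.5 * (pinv @ (d2 - D2.mean(0)).T).T           # embedded coordinates
print(Y.shape)                                      # (500, 2)
```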
Several selection strategies have been proposed in the literature <|cite_start|> (Reference: Improved nonlinear manifold learning for land cover classification via intelligent landmark selection: Nonlinear manifold learning algorithms, mainly isometric feature mapping (Isomap) and local linear embedding (LLE), determine the low-dimensional embedding of the original high dimensional data by finding the geometric distances between samples. Researchers in the remote sensing community have successfully applied Isomap to hyperspectral data to extract useful information. Although results are promising, computational requirements of the local search process are exhorbitant. Landmark-Isomap, which utilizes randomly selected sample points to perform the search, mitigates these problems, but samples of some classes are located in spatially disjointed clusters in the embedded space. We propose an alternative approach to selecting landmark points which focuses on the boundaries of the clusters, rather than randomly selected points or cluster centers. The unique Isomap is evaluated by SStress, a goodness-of-fit measure, and reconstructed with reduced computation, which makes implementation with other classifiers plausible for large data sets. The new method is implemented and applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana.) <|cite_end|> <|cite_start|> (Reference: Selection of landmark points on nonlinear manifolds for spectral unmixing using local homogeneity: Endmember extraction and unmixing methods that exploit nonlinearity in hyperspectral data are receiving increased attention, but they have significant challenges. Global feature extraction methods such as isometric feature mapping have significant computational overhead, which is often addressed for the classification problem via landmark-based methods. Because landmark approaches are approximation methods, experimental results are often highly variable. We propose a new robust landmark selection method for the purpose of pixel unmixing that exploits spectral and spatial homogeneity in a local window kernel. We compare the performance of the method to several landmark selection methods in terms of reconstruction error and processing time.) <|cite_end|> <|cite_start|> (Reference: Active landmark sampling for manifold learning based spectral unmixing: Nonlinear manifold learning based spectral unmixing provides an alternative to direct nonlinear unmixing methods for accommodating nonlinearities inherent in hyperspectral data. Although manifolds can effectively capture nonlinear features in the dimensionality reduction stage of unmixing, the computational overhead is excessive for large remotely sensed data sets. Manifold approximation using a set of distinguishing points is commonly utilized to mitigate the computational burden, but selection of these landmark points is important for adequately representing the topology of the manifold. This study proposes an active landmark sampling framework for manifold learning based spectral unmixing using a small initial landmark set and a computationally efficient backbone-based strategy for constructing the manifold. The active landmark sampling strategy selects the best additional landmarks to develop a more representative manifold and to increase unmixing accuracy.)
<|cite_end|> <|cite_start|> (Reference: An improved set covering problem for Isomap supervised landmark selection: ) <|cite_end|> <|cite_start|> (Reference: A novel landmark point selection method for l-isomap: Isometric feature mapping (ISOMAP) presents remarkable performance for nonlinear dimensionality reduction in diversified research domains. Landmark-ISOMAP(L-ISOMAP) has been proposed to improve the scalability of ISOMAP by performing the most complicated computations on a subset of points referred as to landmarks. In this paper, we present a novel landmark point selection method for L-ISOMAP. The approach first attempts to find a minimum set cover of the neighbourhood sets and get the corresponding data points, referred as to landmark candidate points. After that, it removes the points which belong to neighbour sets of other points from the candidate point set and then the remaining candidate points are the landmarks. We run several experiments on synthetic and physical data sets and the experiment results validate the effectiveness of our proposed method.) <|cite_end|> <|cite_start|> (Reference: A landmark selection method for l-isomap based on greedy algorithm and its application: Isometric feature mapping (Isomap) is a widely-used nonlinear dimensionality reduction method, but it suffers from high computational complexity. L-Isomap is a variant of Isomap which is faster than Isomap. In this algorithm, a subset of points are chosen out of the total data points as landmark points so as to simplify the embedding computation. In this paper, we propose a novel landmark selection method for L-Isomap based on a greedy algorithm. Experiments performed on synthetic and physical data sets validate the effectiveness of the proposed method. Internet traffic matrix has been an effective model to analyzing the Internet. However, the Internet traffic matrix data usually possesses high dimensionality. In this paper, we apply the improved L-Isomap to the real Internet traffic matrix data to investigate its low-dimensional features. The experiment results show that the Internet traffic matrix has a small intrinsic dimension and there indeed exists a low-dimensional manifold structure.) <|cite_end|> <|cite_start|> (Reference: Selecting landmark points for sparse manifold learning: There has been a surge of interest in learning non-linear manifold models to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature in such models. This usually means dimensionality reduction, which naturally implies estimating the intrinsic dimension, but it can also mean selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm. As an added benefit, a continuous manifold parameterization, based on the landmarks, is also found. Experimental results with synthetic and real data illustrate the algorithm.) 
<|cite_end|>, most of them related to selecting landmarks for Landmark Isomap, a variation of a nonlinear dimensionality reduction method designed to improve scalability <|cite_start|> (Reference: Nonlinear subspace clustering using curvature constrained distances: ) <|cite_end|> <|cite_start|> (Reference: Robust Positive semidefinite L-Isomap Ensemble: ) <|cite_end|> <|cite_start|> (Reference: Global versus Local Methods in Nonlinear Dimensionality Reduction: Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.) <|cite_end|> <|cite_start|> (Reference: UL-Isomap based nonlinear dimensionality reduction for hyperspectral imagery classification: ) <|cite_end|>. Finally, Hu et al. <|cite_start|> (Reference: An incremental dimensionality reduction method on discriminant information for pattern classification: ) <|cite_end|> tackle the problem of applying Linear Discriminant Analysis (LDA) on databases where the number of samples is smaller than the data dimensionality. They propose joining MDS and LDA in an algorithm named Discriminant Multidimensional Mapping (DMM), and also employ landmarks in DMM (LDMM) to improve scalability and make it feasible for very large databases. <|paper_end|>
[ "<|reference_start|> Fast embedding of sparse music similarity graphs: This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. \n \nMDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijsktra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijsktra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nystrom approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality. <|reference_end|>", "<|reference_start|> Active landmark sampling for manifold learning based spectral unmixing: Nonlinear manifold learning based spectral unmixing provides an alternative to direct nonlinear unmixing methods for accommodating nonlinearities inherent in hyperspectral data. Although manifolds can effectively capture nonlinear features in the dimensionality reduction stage of unmixing, the computational overhead is excessive for large remotely sensed data sets. Manifold approximation using a set of distinguishing points is commonly utilized to mitigate the computational burden, but selection of these landmark points is important for adequately representing the topology of the manifold. This study proposes an active landmark sampling framework for manifold learning based spectral unmixing using a small initial landmark set and a computationally efficient backbone-based strategy for constructing the manifold. The active landmark sampling strategy selects the best additional landmarks to develop a more representative manifold and to increase unmixing accuracy. <|reference_end|>", "<|reference_start|> Selecting landmark points for sparse manifold learning: There has been a surge of interest in learning non-linear manifold models to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature in such models. This usually means dimensionality reduction, which naturally implies estimating the intrinsic dimension, but it can also mean selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm. As an added benefit, a continuous manifold parameterization, based on the landmarks, is also found. Experimental results with synthetic and real data illustrate the algorithm. <|reference_end|>", "<|reference_start|> An incremental dimensionality reduction method on discriminant information for pattern classification: <|reference_end|>" ]
[ 25, 29, 33, 38 ]
{"<|cite_1|>": "ss-1704554", "<|cite_3|>": "ss-692526", "<|cite_4|>": "ss-1230149", "<|cite_5|>": "ss-1262630", "<|cite_6|>": "ss-678252", "<|cite_8|>": "ss-1051886", "<|cite_10|>": "ss-1230149", "<|cite_11|>": "ss-1704555", "<|cite_12|>": "ss-1148490", "<|cite_13|>": "ss-1266104", "<|cite_14|>": "ss-1266104", "<|cite_15|>": "ss-1062039", "<|cite_16|>": "ss-772742", "<|cite_17|>": "arxiv-41054", "<|cite_18|>": "ss-1230149", "<|multi_cite_19_1|>": "ss-967647", "<|multi_cite_19_2|>": "ss-1037377", "<|multi_cite_19_3|>": "ss-1110689", "<|multi_cite_19_4|>": "ss-1704556", "<|multi_cite_19_5|>": "ss-1704557", "<|cite_21|>": "ss-1051886", "<|multi_cite_22_1|>": "ss-1704558", "<|multi_cite_22_2|>": "ss-1704559", "<|multi_cite_22_3|>": "ss-1704560", "<|cite_23|>": "ss-805978", "<|cite_25|>": "ss-1704561", "<|cite_26|>": "ss-1704562", "<|multi_cite_27_1|>": "ss-1704563", "<|multi_cite_27_2|>": "ss-1704564", "<|multi_cite_27_3|>": "ss-1704565", "<|multi_cite_27_4|>": "ss-1704566", "<|multi_cite_27_5|>": "ss-1704567", "<|multi_cite_27_6|>": "ss-1704568", "<|multi_cite_27_7|>": "ss-1066747", "<|multi_cite_28_1|>": "ss-1704569", "<|multi_cite_28_2|>": "ss-1704570", "<|multi_cite_28_3|>": "ss-805978", "<|multi_cite_28_4|>": "ss-1124869", "<|cite_29|>": "ss-1704571"}
2209.13822
<|paper_start|> Title: TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval Abstract: TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval: Most existing methods in vision-language retrieval match two modalities by either comparing their global feature vectors which misses sufficient information and lacks interpretability, detecting objects in images or videos and aligning the text with fine-grained features which relies on complicated model designs, or modeling fine-grained interaction via cross-attention upon visual and textual tokens which suffers from inferior efficiency. To address these limitations, some recent works simply aggregate the token-wise similarities to achieve fine-grained alignment, but they lack intuitive explanations as well as neglect the relationships between token-level features and global representations with high-level semantics. In this work, we rethink fine-grained cross-modal alignment and devise a new model-agnostic formulation for it. We additionally demystify the recent popular works and subsume them into our scheme. Furthermore, inspired by optimal transport theory, we introduce TokenFlow, an instantiation of the proposed scheme. By modifying only the similarity function, the performance of our method is comparable to the SoTA algorithms with heavy model designs on major video-text retrieval benchmarks. The visualization further indicates that TokenFlow successfully leverages the fine-grained information and achieves better interpretability. Introduction \begin{figure} \centering \includegraphics[width=0.34\textwidth]{imgs/compare.pdf} \caption{A comparison of (a) the coarse-grained methods aligning global representations, (b) the methods relying on object detectors, (c) the methods using cross-attention layers for cross-modal interaction, and (d) our \emph{TokenFlow}.} \label{fig:compare} \end{figure} Cross-modal retrieval between images (or videos) and text has become a fundamental downstream task for vision-language understanding, which aims at searching for semantically similar images or videos given a textual query. With the rapid emergence of multimedia data on the internet, vision-language retrieval has attracted increasing attention and brought great challenges, since both visual media and text contain rich and structured details. A variety of methods have been proposed and have shown strong superiority in learning similarities between generalizable visual and textual representations across many benchmarks. The main idea of these methods is to encode visual and textual inputs into a shared feature space, followed by cross-modal alignment with global features <|cite_start|> (Reference: A Joint Sequence Fusion Model for Video Question Answering and Retrieval: We present an approach named JSFusion (Joint Sequence Fusion) that can measure semantic similarity between any pairs of multimodal sequence data (e.g. a video clip and a language sentence). Our multimodal matching network consists of two key components. First, the Joint Semantic Tensor composes a dense pairwise representation of two sequence data into a 3D tensor. Then, the Convolutional Hierarchical Decoder computes their similarity score by discovering hidden hierarchical matches between the two sequence modalities. Both modules leverage hierarchical attention mechanisms that learn to promote well-matched representation patterns while prune out misaligned ones in a bottom-up manner.
Although the JSFusion is a universal model to be applicable to any multimodal sequence data, this work focuses on video-language tasks including multimodal retrieval and video QA. We evaluate the JSFusion model in three retrieval and VQA tasks in LSMDC, for which our model achieves the best performance reported so far. We also perform multiple-choice and movie retrieval tasks for the MSR-VTT dataset, on which our approach outperforms many state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval: Our objective in this work is video-text retrieval - in particular a joint embedding that enables efficient text-to-video retrieval. The challenges in this area include the design of the visual architecture and the nature of the training data, in that the available large scale video-text training datasets, such as HowTo100M, are noisy and hence competitive performance is achieved only at scale through large amounts of compute. We address both these challenges in this paper. We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. Our model is an adaptation and extension of the recent ViT and Timesformer architectures, and consists of attention in both space and time. The model is flexible and can be trained on both image and video text datasets, either independently or in conjunction. It is trained with a curriculum learning schedule that begins by treating images as 'frozen' snapshots of video, and then gradually learns to attend to increasing temporal context when trained on video datasets. We also provide a new video-text pretraining dataset WebVid-2M, comprised of over two million videos with weak captions scraped from the internet. Despite training on datasets that are an order of magnitude smaller, we show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks including MSR-VTT, MSVD, DiDeMo and LSMDC.) <|cite_end|> <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|>. Despite the superior performance in matching visual and textual features, such a line of work lacks the ability to leverage fine-level information like the relationship between visual objects and textual words. 
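As a concrete illustration of this global-alignment recipe, here is a minimal PyTorch sketch of the coarse-grained approach: pool each modality into a single vector and train with a symmetric InfoNCE objective over cosine similarities. The function and argument names are hypothetical placeholders, not code from any of the cited systems.

```python
import torch
import torch.nn.functional as F

def global_contrastive_loss(vis_global, txt_global, temperature=0.07):
    """Coarse-grained alignment: one pooled vector per image/video and per text.

    vis_global : (B, d) global visual features (e.g. a pooled [CLS] token)
    txt_global : (B, d) global text features
    """
    v = F.normalize(vis_global, dim=-1)
    t = F.normalize(txt_global, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) cosine similarities
    labels = torch.arange(v.size(0), device=v.device)
    # Symmetric InfoNCE: the i-th visual input should match the i-th text.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))
```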
To model fine-grained cross-modal interaction, existing methods fall into three kinds of approaches, as illustrated in Figure \ref{fig:compare}. 1) Some of them utilize pre-trained object detectors to extract region-based visual features and then fuse them with text embeddings for cross-modal training <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.) <|cite_end|> <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. 
Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|>. These works usually suffer from a time-consuming region-feature extraction stage and require complicated architecture designs and training processes. Moreover, their ability may be limited when the object detection model fails to capture certain important information in the downstream tasks. 2) Some other works investigate fine-grained cross-modal interaction methods based on different attention mechanisms, to align the semantic space between token-wise or patch-wise representations from both modalities <|cite_start|> (Reference: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation: Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.) <|cite_end|> <|cite_start|> (Reference: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary.
In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.) <|cite_end|>. This line of work usually requires cross-attention to be performed in an encoder-decoder architecture in both the training and inference stages, and thus becomes less efficient in practice. 3) A few works achieve fine-grained cross-modal interaction by leveraging token-wise or region-word similarities in the contrastive loss, instead of using cross-attention <|cite_start|> (Reference: Stacked Cross Attention for Image-Text Matching: In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.) <|cite_end|> <|cite_start|> (Reference: Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders: Despite the evolution of deep-learning-based visual-textual processing systems, precise multi-modal matching remains a challenging task. In this work, we tackle the task of cross-modal retrieval through image-sentence matching based on word-region alignments, using supervision only at the global image-sentence level. Specifically, we present a novel approach called Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a fine-grained match between the underlying components of images and sentences, i.e., image regions and words, respectively, in order to preserve the informative richness of both modalities. TERAN obtains state-of-the-art results on the image retrieval task on both MS-COCO and Flickr30k datasets. Moreover, on MS-COCO, it also outperforms current approaches on the sentence retrieval task. Focusing on scalable cross-modal information retrieval, TERAN is designed to keep the visual and textual data pipelines well separated. Cross-attention links invalidate any chance to separately extract visual and textual features needed for the online search and the offline indexing steps in large-scale retrieval systems.
In this respect, TERAN merges the information from the two domains only during the final alignment phase, immediately before the loss computation. We argue that the fine-grained alignments produced by TERAN pave the way towards the research for effective and efficient methods for large-scale cross-modal information retrieval. We compare the effectiveness of our approach against relevant state-of-the-art methods. On the MS-COCO 1K test set, we obtain an improvement of 5.7% and 3.5% respectively on the image and the sentence retrieval tasks on the Recall@1 metric. The code used for the experiments is publicly available on GitHub at https://github.com/mesnico/TERAN.) <|cite_end|> <|cite_start|> (Reference: FILIP: Fine-grained Interactive Language-Image Pre-Training: Unsupervised large-scale vision-language pre-training has shown promising advances on various downstream tasks. Existing methods often model the cross-modal interaction either via the similarity of the global feature of each modality which misses sufficient information, or finer-grained interactions using cross/self-attention upon visual and textual tokens. However, cross/self-attention suffers from inferior efficiency in both training and inference. In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective. FILIP successfully leverages the finer-grained expressiveness between image patches and textual words by modifying only contrastive loss, while simultaneously gaining the ability to pre-compute image and text representations offline at inference, keeping both large-scale training and inference efficient. Furthermore, we construct a new large-scale image-text pair dataset called FILIP300M for pre-training. Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks including zero-shot image classification and image-text retrieval. The visualization on word-patch alignment further shows that FILIP can learn meaningful fine-grained features with promising localization ability.) <|cite_end|>. Although these methods based on fine-level similarities are shown to be capable of learning fine-grained representations, they directly drop the global representations that contain sufficient information, which makes them hard to adapt to downstream tasks with methods that use pre-trained transformer backbones like CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks.
We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>, of which the pre-training objective is aligning the global representations (classification tokens) rather than patch representations. They also neglect the fact that the relationships between single tokens and global statistics are related to the overall similarity. Moreover, these approaches are described in a less intuitive way that does not clearly explain how they work. In this paper, we first rethink cross-modal fine-grained alignment and introduce a universal formulation for it. Then, we subsume the recent popular works that learn fine-grained interaction through token-wise or region-word similarities into our scheme and explain more clearly how they work. Furthermore, based on the proposed scheme, we try to model the matching problem as an optimal transport problem and define the distance between the two modalities as the Earth Mover's Distance (EMD) <|cite_start|> (Reference: The Earth Mover's Distance as a Metric for Image Retrieval: ) <|cite_end|> between their structured representations. Specifically, we use spatial cross-correlation between an image (or a video) and a text as the marginal distributions when computing the optimal transport plan, where elements with larger weights generate more matching flows and thus contribute more to the overall similarity, which alleviates the issues mentioned above. However, the optimal transport problem is complicated and costly in both time and memory. Moreover, most existing EMD implementations do not guarantee correctness and convergence, thus hurting model performance. To address the aforementioned issues, inspired by optimal transport theory, we present \emph{TokenFlow}, a more efficient and effective instantiation of the proposed scheme that achieves promising performance. \emph{TokenFlow} computes a matching flow between the token-level features and decomposes overall similarity into several token-wise similarities with different contributions. \emph{TokenFlow} adopts a very simple alignment mechanism, built from dot products and summations, without complex object detectors or cross-attention layers. We conduct extensive experiments to compare with other instantiations on multiple benchmarks to demonstrate the effectiveness of our algorithm. Our main contributions are summarized as follows: \begin{itemize} \item We introduce a new perspective of fine-grained cross-modal alignment with a model-agnostic formulation. \item We subsume the recent popular works into our formulation and demystify them in a clearer way. \item We propose \emph{TokenFlow}, a novel fine-grained alignment function. Experimental results show that by learning fine-grained alignment, the performance of \emph{TokenFlow} is comparable to SoTA algorithms with heavy model designs by only altering the similarity function, on major video-text retrieval benchmarks.
Visualizations further illustrate that \emph{TokenFlow} learns meaningful fine-grained representations with promising matching ability. \end{itemize} Related Work \subsection{Vision-Language Retrieval} Existing representative works on vision-language retrieval follow the trend of learning a joint embedding space to measure the distance between visual and textual representations; these works can be divided into two categories: coarse-grained and fine-grained. Coarse-grained methods typically encode images <|cite_start|> (Reference: An Empirical Study of Training End-to-End Vision-and-Language Transformers: Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer. METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Notably, when further scaled up, our best VQA model achieves an accuracy of 80.54%. Code and pre-trained models are released at https://github.com/zdou0830/METER.) <|cite_end|> <|cite_start|> (Reference: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation: Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets.
On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.) <|cite_end|> <|cite_start|> (Reference: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision: Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.) <|cite_end|> or videos <|cite_start|> (Reference: Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling: The canonical approach to video-and-language learning (e.g., video question answering) dictates a neural model to learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning. To provide a remedy to this dilemma, we propose a generic framework ClipBERT that enables affordable end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. 
Videos in the datasets are from considerably different domains and lengths, ranging from 3-second generic domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. Our code is publicly available at https://github.com/jayleicn/ClipBERT) <|cite_end|> <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|> <|cite_start|> (Reference: CLIP2Video: Mastering Video-Text Retrieval via Image CLIP: We present CLIP2Video network to transfer the image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in the domain of video-and-language learning try to distill the spatio-temporal video features and multi-modal interaction between videos and languages from a large-scale video-text dataset. Different from them, we leverage pretrained image-language model, simplify it as a two-stage framework with co-learning of image-text and enhancing temporal relations between video frames and video-text respectively, make it able to train on comparatively small datasets. Specifically, based on the spatial semantics captured by Contrastive Language-Image Pretraining (CLIP) model, our model involves a Temporal Difference Block to capture motions at fine temporal video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance the multi-modal correlation. We conduct thorough ablation studies, and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX.) <|cite_end|> <|cite_start|> (Reference: Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. 
In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match. DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\% R@1 in MSR-VTT.) <|cite_end|> and textual queries to global features and accordingly map them into a common latent space, where the similarity can be measured directly with ranking loss variants. For video-text retrieval, recent methods based on pre-trained transformer CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> have achieved noticeable results and drawn increasing attention. CLIP4Clip <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. 
In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|> is the first to apply CLIP to video-text retrieval and also proposes three different ways of aggregating video frames. CLIP2Video <|cite_start|> (Reference: CLIP2Video: Mastering Video-Text Retrieval via Image CLIP: We present CLIP2Video network to transfer the image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in the domain of video-and-language learning try to distill the spatio-temporal video features and multi-modal interaction between videos and languages from a large-scale video-text dataset. Different from them, we leverage pretrained image-language model, simplify it as a two-stage framework with co-learning of image-text and enhancing temporal relations between video frames and video-text respectively, make it able to train on comparatively small datasets. Specifically, based on the spatial semantics captured by Contrastive Language-Image Pretraining (CLIP) model, our model involves a Temporal Difference Block to capture motions at fine temporal video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance the multi-modal correlation. We conduct thorough ablation studies, and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX.) <|cite_end|> captures temporal relationships of video frames and re-aligns the tokens of video clips and phrases. CAMoE <|cite_start|> (Reference: Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match.
DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\% R@1 in MSR-VTT.) <|cite_end|> extracts multi-perspective video representations including action, entity, and scene. Highly summarized global visual and textual descriptions may lose much useful fine-grained information. For these reasons, many works try to utilize fine-level features and achieve fine-grained alignments between modalities. One line of work relies on object detection to represent the visual input by dozens of object-centric features and then combines them with the paired text as the input of the model <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.) <|cite_end|> <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding.
In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|>. Another line of work utilizes a series of cross-modal transformers to learn fine-grained interaction between token-wise representations of two modalities <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|> <|cite_start|> (Reference: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.) <|cite_end|>. These methods either require a pre-trained object detector to perform time-consuming region-feature extraction, or cross-modal transformer layers to align the features, which significantly hinders their efficiency and scalability. In contrast, we employ a simple but effective way to align the representations of two modalities via token-level similarity matrices. \subsection{Token-Wise/Region-Word Cross-modal Alignment} Some efforts have been made to learn fine-grained cross-modal interaction between two modalities by leveraging token-wise or region-word similarities in the contrastive loss. TERAN <|cite_start|> (Reference: Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders: Despite the evolution of deep-learning-based visual-textual processing systems, precise multi-modal matching remains a challenging task. In this work, we tackle the task of cross-modal retrieval through image-sentence matching based on word-region alignments, using supervision only at the global image-sentence level. Specifically, we present a novel approach called Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a fine-grained match between the underlying components of images and sentences, i.e., image regions and words, respectively, in order to preserve the informative richness of both modalities. TERAN obtains state-of-the-art results on the image retrieval task on both MS-COCO and Flickr30k datasets. Moreover, on MS-COCO, it also outperforms current approaches on the sentence retrieval task. Focusing on scalable cross-modal information retrieval, TERAN is designed to keep the visual and textual data pipelines well separated. Cross-attention links invalidate any chance to separately extract visual and textual features needed for the online search and the offline indexing steps in large-scale retrieval systems. In this respect, TERAN merges the information from the two domains only during the final alignment phase, immediately before the loss computation. We argue that the fine-grained alignments produced by TERAN pave the way towards the research for effective and efficient methods for large-scale cross-modal information retrieval. We compare the effectiveness of our approach against relevant state-of-the-art methods. On the MS-COCO 1K test set, we obtain an improvement of 5.7% and 3.5% respectively on the image and the sentence retrieval tasks on the Recall@1 metric.
The code used for the experiments is publicly available on GitHub at https://github.com/mesnico/TERAN.) <|cite_end|> detects and encodes image regions at the object level with Faster-RCNN <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> and sums the maximum of the region-word similarity scores with respect to each word or region. Similar to TERAN, FILIP <|cite_start|> (Reference: FILIP: Fine-grained Interactive Language-Image Pre-Training: Unsupervised large-scale vision-language pre-training has shown promising advances on various downstream tasks. Existing methods often model the cross-modal interaction either via the similarity of the global feature of each modality which misses sufficient information, or finer-grained interactions using cross/self-attention upon visual and textual tokens. However, cross/self-attention suffers from inferior efficiency in both training and inference. In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective. FILIP successfully leverages the finer-grained expressiveness between image patches and textual words by modifying only contrastive loss, while simultaneously gaining the ability to pre-compute image and text representations offline at inference, keeping both large-scale training and inference efficient. Furthermore, we construct a new large-scale image-text pair dataset called FILIP300M for pre-training. Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks including zero-shot image classification and image-text retrieval. The visualization on word-patch alignment further shows that FILIP can learn meaningful fine-grained features with promising localization ability.) 
<|cite_end|> also aggregates the maximum token-wise similarity scores according to every single feature, but it tries to directly localize fine-grained objects from visual patches, instead of using object detectors. SCAN <|cite_start|> (Reference: Stacked Cross Attention for Image-Text Matching: In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.) <|cite_end|> attends differentially to important words or regions. All these works drop the global representations that contain sufficient information and neglect the relationships between fine-level features and global statistics. <|paper_end|>
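To ground the token-wise aggregation described in this subsection, the following is a minimal PyTorch sketch of FILIP-style late interaction: for every token in one modality, take its maximum similarity over all tokens of the other modality, then average. This sketches the general mechanism only (the names are hypothetical); it is not TokenFlow itself, whose flow weights are defined in the paper body.

```python
import torch
import torch.nn.functional as F

def max_token_similarity(vis_tokens, txt_tokens):
    """FILIP-style token-wise late interaction for one (visual, text) pair.

    vis_tokens : (Nv, d) patch/frame token embeddings
    txt_tokens : (Nt, d) word token embeddings
    Returns a scalar similarity built only from dot products and reductions.
    """
    v = F.normalize(vis_tokens, dim=-1)
    t = F.normalize(txt_tokens, dim=-1)
    sim = v @ t.T                        # (Nv, Nt) token-wise similarity matrix
    v2t = sim.max(dim=1).values.mean()   # best text match per visual token
    t2v = sim.max(dim=0).values.mean()   # best visual match per text token
    return 0.5 * (v2t + t2v)
```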
[ "<|reference_start|> Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the \"free\" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2. <|reference_end|>", "<|reference_start|> Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP. <|reference_end|>", "<|reference_start|> Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. 
DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match. DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\\% R@1 in MSR-VTT. <|reference_end|>", "<|reference_start|> ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt. <|reference_end|>" ]
[ 4, 20, 23, 28 ]
{"<|multi_cite_1_1|>": "arxiv-168581", "<|multi_cite_1_2|>": "arxiv-331663", "<|multi_cite_1_3|>": "arxiv-335405", "<|multi_cite_2_1|>": "arxiv-259146", "<|multi_cite_2_2|>": "arxiv-270990", "<|multi_cite_2_3|>": "arxiv-225610", "<|multi_cite_3_1|>": "arxiv-355417", "<|multi_cite_3_2|>": "arxiv-319372", "<|multi_cite_4_1|>": "arxiv-152364", "<|multi_cite_4_2|>": "arxiv-284127", "<|multi_cite_4_3|>": "arxiv-381075", "<|cite_5|>": "arxiv-323919", "<|cite_6|>": "ss-792770", "<|multi_cite_7_1|>": "arxiv-378909", "<|multi_cite_7_2|>": "arxiv-355417", "<|multi_cite_7_3|>": "arxiv-320496", "<|multi_cite_8_1|>": "arxiv-320605", "<|multi_cite_8_2|>": "arxiv-335405", "<|multi_cite_8_3|>": "arxiv-349895", "<|multi_cite_8_4|>": "arxiv-365810", "<|cite_9|>": "arxiv-323919", "<|cite_10|>": "arxiv-335405", "<|cite_11|>": "arxiv-349895", "<|cite_12|>": "arxiv-365810", "<|multi_cite_13_1|>": "arxiv-259146", "<|multi_cite_13_2|>": "arxiv-270990", "<|multi_cite_13_3|>": "arxiv-225610", "<|multi_cite_14_1|>": "arxiv-225610", "<|multi_cite_14_2|>": "arxiv-319372", "<|cite_15|>": "arxiv-284127", "<|cite_16|>": "arxiv-78819", "<|cite_17|>": "arxiv-381075", "<|cite_18|>": "arxiv-152364"}
1311.6647
<|paper_start|> Title: DoF Analysis of the K-user MISO Broadcast Channel with Alternating CSIT Abstract: DoF Analysis of the K-user MISO Broadcast Channel with Alternating CSIT: We consider a $K$-user multiple-input single-output (MISO) broadcast channel (BC) where the channel state information (CSI) of user $i$ $(i=1,2,\ldots,K)$ may be either perfect (P), delayed (D), or not known (N) at the transmitter with probabilities $\lambda_P^i$, $\lambda_D^i$ and $\lambda_N^i$, respectively. In this channel, according to the three possible CSIT states for each user, the joint CSIT of the $K$ users can have at most $3^K$ realizations. Although the results by Tandon et al. show that the Degrees of Freedom (DoF) region for the two-user MISO BC with symmetric marginal probabilities (i.e., $\lambda_Q^i=\lambda_Q \forall i\in \{1,2,\ldots,K\}, Q\in \{P,D,N\}$) depends only on the marginal probabilities, we show that this interesting result does not hold in general when the number of users is more than two. In other words, the DoF region is a function of the \textit{CSIT pattern}, or equivalently, all the joint probabilities. In this paper, given the marginal probabilities of CSIT, we derive an outer bound for the DoF region of the $K$-user MISO BC. Subsequently, the achievability of these outer bounds is considered in certain scenarios. Finally, we show the dependence of the DoF region on the joint probabilities. Introduction In contrast to point-to-point multiple-input multiple-output (MIMO) communication, where the channel state information at the transmitter (CSIT) does not affect the multiplexing gain, in a multiple-input single-output (MISO) broadcast channel (BC), knowledge of CSIT is crucial for interference mitigation and beamforming purposes. However, the assumption of perfect CSIT may not always hold in practice due to channel estimation and feedback latency. Therefore, the idea of communication under some form of imperfection in CSIT has recently gained more attention. The so-called MAT algorithm was presented in <|cite_start|> (Reference: Completely Stale Transmitter Channel State Information is Still Very Useful: Transmitter channel state information (CSIT) is crucial for the multiplexing gains offered by advanced interference management techniques such as multiuser MIMO and interference alignment. Such CSIT is usually obtained by feedback from the receivers, but the feedback is subject to delays. The usual approach is to use the fed back information to predict the current channel state and then apply a scheme designed assuming perfect CSIT. When the feedback delay is large compared to the channel coherence time, such a prediction approach completely fails to achieve any multiplexing gain. In this paper, we show that even in this case, the completely stale CSI is still very useful. More concretely, we show that in a MIMO broadcast channel with $K$ transmit antennas and $K$ receivers each with 1 receive antenna, $\frac{K}{1+1/2+ ...+ \frac{1}{K}} (> 1) $ degrees of freedom is achievable even when the fed back channel state is completely independent of the current channel state. Moreover, we establish that if all receivers have independent and identically distributed channels, then this is the optimal number of degrees of freedom achievable. In the optimal scheme, the transmitter uses the fed back CSI to learn the side information that the receivers receive from previous transmissions rather than to predict the current channel state. Our result can be viewed as the first example of feedback providing a degree-of-freedom gain in memoryless channels.) <|cite_end|> where it was shown that, in terms of degrees of freedom, even outdated CSIT can result in a significant performance improvement compared to the case with no CSIT. Assuming correlation between the feedback information and the current channel state (e.g., when the feedback latency is smaller than the coherence time of the channel), the authors in <|cite_start|> (Reference: Degrees of Freedom of Time Correlated MISO Broadcast Channel with Delayed CSIT: We consider the time correlated multiple-input single-output (MISO) broadcast channel where the transmitter has imperfect knowledge on the current channel state, in addition to delayed channel state information. By representing the quality of the current channel state information as P^-{\alpha} for the signal-to-noise ratio P and some constant {\alpha} \geq 0, we characterize the optimal degree of freedom region for this more general two-user MISO broadcast correlated channel. The essential ingredients of the proposed scheme lie in the quantization and multicasting of the overheard interferences, while broadcasting new private messages. Our proposed scheme smoothly bridges between the scheme recently proposed by Maddah-Ali and Tse with no current state information and a simple zero-forcing beamforming with perfect current state information.) <|cite_end|> and <|cite_start|> (Reference: Optimal use of current and outdated channel state information: Degrees of freedom of the miso bc with mixed csit: We consider a multiple-input-single-output (MISO) broadcast channel with mixed channel state information at the transmitter (CSIT) that consists of imperfect current CSIT and perfect outdated CSIT. Recent work by Kobayashi et al. presented a scheme that exploits both imperfect current CSIT and perfect outdated CSIT and achieves higher degrees of freedom (DoF) than possible with only imperfect current CSIT or only outdated CSIT individually. In this work, we further improve the achievable DoF in this setting by incorporating additional private messages, and provide a tight information theoretic DoF outer bound, thereby identifying the DoF optimal use of mixed CSIT. The new result is stronger even in the original setting of only delayed CSIT, because it allows us to remove the restricting assumption of statistically equivalent fading for all users.) <|cite_end|> consider the degrees of freedom of a time-correlated MISO BC, where the optimal scheme is shown to be a combination of zero-forcing beamforming (ZFBF) and the MAT algorithm. Following these works, the general case of mixed CSIT and the $K$-user MISO BC with time-correlated delayed CSIT are discussed in <|cite_start|> (Reference: Degrees-of-Freedom Region of the MISO Broadcast Channel with General Mixed-CSIT: In the setting of the two-user broadcast channel, recent work by Maddah-Ali and Tse has shown that knowledge of prior channel state information at the transmitter (CSIT) can be useful, even in the absence of any knowledge of current CSIT. Very recent work by Kobayashi et al., Yang et al., and Gou and Jafar, extended this to the case where, instead of no current CSIT knowledge, the transmitter has partial knowledge, and where under a symmetry assumption, the quality of this knowledge is identical for the different users' channels.
Motivated by the fact that in multiuser settings, the quality of CSIT feedback may vary across different links, we here generalize the above results to the natural setting where the current CSIT quality varies for different users' channels. For this setting we derive the optimal degrees-of-freedom (DoF) region, and provide novel multi-phase broadcast schemes that achieve this optimal region. Finally this generalization incorporates and generalizes the corresponding result in Maleki et al. which considered the broadcast channel with one user having perfect CSIT and the other only having prior CSIT.) <|cite_end|> and <|cite_start|> (Reference: On the Degrees of Freedom of the K-User Time Correlated Broadcast Channel with Delayed CSIT: The Degrees of Freedom (DoF) of a K-User MISO Broadcast Channel (BC) is studied when the Transmitter (TX) has access to a delayed channel estimate in addition to an imperfect estimate of the current channel. The current estimate could be for example obtained from prediction applied on past estimates, in the case where feedback delay is within the coherence time. Building on previous recent works on this setting with two users, the estimation error of the current channel is characterized by its scaling as P at the exponent \alpha, where \alpha=1 (resp. \alpha=0) corresponds to an estimate being essentially perfect (resp. useless) in terms of DoF. In this work, we contribute to the characterization of the DoF region in such a setting by deriving an outerbound for the DoF region and by providing an achievable DoF region. The achievable DoF is obtained by developing a new alignment scheme, called the K\alpha-MAT scheme, which builds upon both the principle of the MAT alignment scheme from Maddah-Ali and Tse and Zero-Forcing to achieve a larger DoF when the delayed CSIT received is correlated with the instantaneous channel state.) <|cite_end|>, respectively. While all these works consider the concept of delayed CSIT in time domain, <|cite_start|> (Reference: Imperfect and Unmatched CSIT is Still Useful for the Frequency Correlated MISO Broadcast Channel: Since Maddah-Ali and Tse showed that the completely stale transmitter-side channel state information (CSIT) still benefits the Degrees of Freedom (DoF) of the Multiple-Input-Multiple-Output (MISO) Broadcast Channel (BC), there has been much interest in the academic literature to investigate the impact of imperfect CSIT on \emph{DoF} region of time correlated broadcast channel. Even though the research focus has been on time correlated channels so far, a similar but different problem concerns the frequency correlated channels. Indeed, the imperfect CSIT also impacts the DoF region of frequency correlated channels, as exemplified by current multi-carrier wireless systems. This contribution, for the first time in the literature, investigates a general frequency correlated setting where a two-antenna transmitter has imperfect knowledge of CSI of two single-antenna users on two adjacent subbands. A new scheme is derived as an integration of Zero-Forcing Beamforming (ZFBF) and the scheme proposed by Maddah-Ali and Tse. The achievable DoF region resulted by this scheme is expressed as a function of the qualities of CSIT.) 
<|cite_end|> and <|cite_start|> (Reference: MISO Broadcast Channel with Imperfect and (Un)matched CSIT in the Frequency Domain: DoF Region and Transmission Strategies: In this contribution, we focus on a frequency domain two-user Multiple-Input-Single-Output Broadcast Channel (MISO BC) where the transmitter has imperfect and (un)matched Channel State Information (CSI) of the two users in two subbands. We provide an upper-bound to the Degrees-of-Freedom (DoF) region, which is tight compared to the state of the art. By decomposing the subbands into subchannels according to the CSI feedback qualities, we interpret the DoF region as the weighted-sum of that in each subchannel. Moreover, we study the sum \emph{DoF} loss when employing sub-optimal schemes, namely Frequency Division Multiple Access (FDMA), Zero-Forcing Beamforming (ZFBF) and the $S_3^{3/2}$ scheme proposed by Tandon et al. The results show that by switching among the sub-optimal strategies, we can obtain at least 80% and 66.7% of the optimal sum \emph{DoF} performance for the unmatched and matched CSIT scenario respectively.) <|cite_end|> deal with the DoF region and its achievable schemes in a frequency correlated MISO BC where there is no delayed CSIT but imperfect CSIT across subbands, which is more inline with practical systems as Long Term Evolution (LTE). The most relevant article to this paper is the work done in <|cite_start|> (Reference: On the synergistic benefits of alternating csit for the miso broadcast channel: The degrees of freedom (DoFs) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, <i>Ii</i>, <i>i</i>=1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (<i>P</i>), delayed (<i>D</i>), or not available (<i>N</i>), i.e., <i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>N</i>,<i>D</i>} , and therefore, the overall CSIT can alternate between the nine resulting states <i>I</i><sub>1</sub><i>I</i><sub>2</sub>. The fraction of time associated with CSIT state <i>I</i><sub>1</sub><i>I</i><sub>2</sub> is denoted by the parameter λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> and it is assumed throughout that λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> = λ<i>I</i><sub>2</sub><i>I</i><sub>1</sub>, i.e., λ<i>PN</i> = λ<i>NP</i>, λ<i>PD</i>=λ<i>DP</i>, λ<i>DN</i>=λ<i>ND</i> . Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. Surprisingly, the DoF region is found to depend only on the marginal probabilities (λ<i>P</i>, λ<i>D</i>,λ<i>N</i>) = (Σ<i>I</i><sub>2</sub> λ<i>PI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>DI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>NI</i><sub>2</sub>), <i>I</i><sub>2</sub> ∈ {<i>P</i>, <i>D</i>, <i>N</i>}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, <i>D</i>(λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub>:<i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>D</i>,<i>N</i>}) , is the same as the DoF region with only three CSIT states <i>D</i>(λ<i>PP</i>, λ<i>DD</i>, λ<i>NN</i>), under the same marginal distribution of CSIT states, i.e., (λ<i>PP</i>, λ<i>DD</i>,λ<i>NN</i>)=(λ<i>P</i>,λ<i>D</i>,λ<i>N</i>). 
The sum-DoF value can be expressed as DoF=min([(4+2λ<i>P</i>)/3], 1+λ<i>P</i>+λ<i>D</i>), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=([3/2] DoF-2,1- [1/2] DoF) when DoF ∈ [[4/3],2] and (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=(0,(DoF-1)<sup>+</sup>) when DoF ∈ [0, [4/3]). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with <i>M</i> transmit antennas and <i>K</i> single antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoFs of min(<i>M</i>,<i>K</i>) is characterized. By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time perfect CSIT is required per user in order to achieve the DoF of min(<i>M</i>,<i>K</i>) is given by min(<i>M</i>,<i>K</i>)/<i>K</i>.) <|cite_end|> where the synergistic benefits of alternating CSIT over fixed CSIT was presented in a two user MISO BC with two transmit antennas. The converse in <|cite_start|> (Reference: On the synergistic benefits of alternating csit for the miso broadcast channel: The degrees of freedom (DoFs) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, <i>Ii</i>, <i>i</i>=1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (<i>P</i>), delayed (<i>D</i>), or not available (<i>N</i>), i.e., <i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>N</i>,<i>D</i>} , and therefore, the overall CSIT can alternate between the nine resulting states <i>I</i><sub>1</sub><i>I</i><sub>2</sub>. The fraction of time associated with CSIT state <i>I</i><sub>1</sub><i>I</i><sub>2</sub> is denoted by the parameter λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> and it is assumed throughout that λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> = λ<i>I</i><sub>2</sub><i>I</i><sub>1</sub>, i.e., λ<i>PN</i> = λ<i>NP</i>, λ<i>PD</i>=λ<i>DP</i>, λ<i>DN</i>=λ<i>ND</i> . Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. Surprisingly, the DoF region is found to depend only on the marginal probabilities (λ<i>P</i>, λ<i>D</i>,λ<i>N</i>) = (Σ<i>I</i><sub>2</sub> λ<i>PI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>DI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>NI</i><sub>2</sub>), <i>I</i><sub>2</sub> ∈ {<i>P</i>, <i>D</i>, <i>N</i>}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, <i>D</i>(λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub>:<i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>D</i>,<i>N</i>}) , is the same as the DoF region with only three CSIT states <i>D</i>(λ<i>PP</i>, λ<i>DD</i>, λ<i>NN</i>), under the same marginal distribution of CSIT states, i.e., (λ<i>PP</i>, λ<i>DD</i>,λ<i>NN</i>)=(λ<i>P</i>,λ<i>D</i>,λ<i>N</i>). 
The sum-DoF value can be expressed as DoF=min([(4+2λ<i>P</i>)/3], 1+λ<i>P</i>+λ<i>D</i>), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=([3/2] DoF-2,1- [1/2] DoF) when DoF ∈ [[4/3],2] and (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=(0,(DoF-1)<sup>+</sup>) when DoF ∈ [0, [4/3]). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with <i>M</i> transmit antennas and <i>K</i> single antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoFs of min(<i>M</i>,<i>K</i>) is characterized. By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time perfect CSIT is required per user in order to achieve the DoF of min(<i>M</i>,<i>K</i>) is given by min(<i>M</i>,<i>K</i>)/<i>K</i>.) <|cite_end|> is based on the idea of assigning artificial receivers to the users whose observations are (statistically) equivalent to the corresponding user when CSIT is (not) perfect. However, whether this brilliant approach could be generalized to the scenarios with more than two transmit antennas and two users is unknown. Therefore, for such scenarios, it becomes necessary to check other ways to find the fundamental limits of the system. To the best of our knowledge, this is the first paper in the literature addressing the general $K$-user MISO BC with alternating CSIT. To this end, our contributions are as follows. \begin{itemize} \item Given the marginal probabilities of CSIT in a $K$-user MISO BC, we derive an outer bound for the DoF region where the proof is based on finding upper bounds for a certain difference between entropies and is inspired by <|cite_start|> (Reference: MISO Broadcast Channel with Imperfect and (Un)matched CSIT in the Frequency Domain: DoF Region and Transmission Strategies: In this contribution, we focus on a frequency domain two-user Multiple-Input-Single-Output Broadcast Channel (MISO BC) where the transmitter has imperfect and (un)matched Channel State Information (CSI) of the two users in two subbands. We provide an upper-bound to the Degrees-of-Freedom (DoF) region, which is tight compared to the state of the art. By decomposing the subbands into subchannels according to the CSI feedback qualities, we interpret the DoF region as the weighted-sum of that in each subchannel. Moreover, we study the sum \emph{DoF} loss when employing sub-optimal schemes, namely Frequency Division Multiple Access (FDMA), Zero-Forcing Beamforming (ZFBF) and the $S_3^{3/2}$ scheme proposed by Tandon et al. The results show that by switching among the sub-optimal strategies, we can obtain at least 80% and 66.7% of the optimal sum \emph{DoF} performance for the unmatched and matched CSIT scenario respectively.) <|cite_end|> and the results in <|cite_start|> (Reference: An Extremal Inequality Motivated by Multiterminal Information-Theoretic Problems: We prove a new extremal inequality, motivated by the vector Gaussian broadcast channel and the distributed source coding with a single quadratic distortion constraint problem. 
As a corollary, this inequality yields a generalization of the classical vector entropy-power inequality (EPI). As another corollary, this inequality sheds insight into maximizing differential entropy of a sum of jointly distributed random variables, generalizing a classical result of Cover and Zhang) <|cite_end|>. \item We investigate the achievability and tightness of the outer bounds. Several achievable schemes are introduced and shown to achieve the corner points of the DoF region in some scenarios, therefore proving that the outer bounds are optimal bounds in those scenarios. \item Finally, we provide an example which proves that in contrast to the results of <|cite_start|> (Reference: On the synergistic benefits of alternating csit for the miso broadcast channel: The degrees of freedom (DoFs) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, <i>Ii</i>, <i>i</i>=1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (<i>P</i>), delayed (<i>D</i>), or not available (<i>N</i>), i.e., <i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>N</i>,<i>D</i>} , and therefore, the overall CSIT can alternate between the nine resulting states <i>I</i><sub>1</sub><i>I</i><sub>2</sub>. The fraction of time associated with CSIT state <i>I</i><sub>1</sub><i>I</i><sub>2</sub> is denoted by the parameter λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> and it is assumed throughout that λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> = λ<i>I</i><sub>2</sub><i>I</i><sub>1</sub>, i.e., λ<i>PN</i> = λ<i>NP</i>, λ<i>PD</i>=λ<i>DP</i>, λ<i>DN</i>=λ<i>ND</i> . Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. Surprisingly, the DoF region is found to depend only on the marginal probabilities (λ<i>P</i>, λ<i>D</i>,λ<i>N</i>) = (Σ<i>I</i><sub>2</sub> λ<i>PI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>DI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>NI</i><sub>2</sub>), <i>I</i><sub>2</sub> ∈ {<i>P</i>, <i>D</i>, <i>N</i>}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, <i>D</i>(λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub>:<i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>D</i>,<i>N</i>}) , is the same as the DoF region with only three CSIT states <i>D</i>(λ<i>PP</i>, λ<i>DD</i>, λ<i>NN</i>), under the same marginal distribution of CSIT states, i.e., (λ<i>PP</i>, λ<i>DD</i>,λ<i>NN</i>)=(λ<i>P</i>,λ<i>D</i>,λ<i>N</i>). The sum-DoF value can be expressed as DoF=min([(4+2λ<i>P</i>)/3], 1+λ<i>P</i>+λ<i>D</i>), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=([3/2] DoF-2,1- [1/2] DoF) when DoF ∈ [[4/3],2] and (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=(0,(DoF-1)<sup>+</sup>) when DoF ∈ [0, [4/3]). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with <i>M</i> transmit antennas and <i>K</i> single antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoFs of min(<i>M</i>,<i>K</i>) is characterized. 
By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time perfect CSIT is required per user in order to achieve the DoF of min(<i>M</i>,<i>K</i>) is given by min(<i>M</i>,<i>K</i>)/<i>K</i>.) <|cite_end|> for the two-user BC, the DoF region of the $K$-user MISO BC ($K\geq3$) is not, in general, a function of the marginal probabilities alone. \end{itemize} The paper is organized as follows. In section \ref{s2}, the system model and preliminaries are presented. The main result of this paper is provided in section \ref{s3} as a theorem. The proof and the tightness of the outer bounds will be discussed in sections \ref{ss4} and \ref{s55}, respectively. Section \ref{sh} shows that the DoF region depends on the joint CSIT probabilities in general, and section \ref{s7} concludes the paper. Throughout the paper, vectors are shown in bold lower case while matrices are written in upper case. $\mathcal{CN}(\textbf{0},\mathbf{\Sigma})$ is the circularly symmetric complex Gaussian distribution with covariance matrix $\mathbf{\Sigma}$. $f\sim O(\log P)$ is equivalent to $\lim_{P\to \infty}\frac{f}{\log P}=0$. $X_i^n=\{X(i),X(i+1),\ldots,X(n)\}$ is the time extension of the random variable $X$, and when $i=1$, the subscript is dropped for simplicity (i.e., written as $X^n$). $(.)^T$ and $(.)^H$ denote the transpose and conjugate transpose, respectively. The terms upper bound and outer bound, as used in this paper, have similar meanings with a slight difference: the former is used only for scalars, while the latter is a more general term used for multidimensional regions and may be defined by a finite or infinite number of upper bounds. Finally, let $S_1$ and $S_2$ be two sets of inequalities defining the regions $D_1$ and $D_2$, respectively, and assume the region $D$ is defined by the set of inequalities $S=S_1\cup S_2$ or, equivalently, $D=D_1\cap D_2$. The set of inequalities $S_1$ is called inactive (or redundant) in defining $D$ when $D_2\subset D_1$. <|paper_end|>
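As a toy illustration of the inactive-inequality convention just defined (our own example, not taken from the paper), take one-dimensional regions:
\[
  D_1 = \{\, d_1 \le 2 \,\}, \qquad D_2 = \{\, d_1 \le 1 \,\}, \qquad
  D = D_1 \cap D_2 = D_2 .
\]
Since $D_2 \subset D_1$, the set $S_1 = \{d_1 \le 2\}$ is inactive (redundant) in defining $D$: removing the bound $d_1 \le 2$ leaves the region unchanged.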
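As a second worked check (ours, using only the formula quoted in the MAT reference at the start of this introduction), the MAT sum DoF for the smallest non-trivial case $K=2$ is
\[
  \frac{K}{1+\frac{1}{2}+\cdots+\frac{1}{K}}\,\bigg|_{K=2}
  = \frac{2}{1+\frac{1}{2}} = \frac{4}{3} > 1 ,
\]
i.e., strictly more than the single degree of freedom available without any CSIT, which is exactly the gain from completely stale feedback highlighted above.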
[ "<|reference_start|> Degrees of Freedom of Time Correlated MISO Broadcast Channel with Delayed CSIT: We consider the time correlated multiple-input single-output (MISO) broadcast channel where the transmitter has imperfect knowledge on the current channel state, in addition to delayed channel state information. By representing the quality of the current channel state information as P^-{\\alpha} for the signal-to-noise ratio P and some constant {\\alpha} \\geq 0, we characterize the optimal degree of freedom region for this more general two-user MISO broadcast correlated channel. The essential ingredients of the proposed scheme lie in the quantization and multicasting of the overheard interferences, while broadcasting new private messages. Our proposed scheme smoothly bridges between the scheme recently proposed by Maddah-Ali and Tse with no current state information and a simple zero-forcing beamforming with perfect current state information. <|reference_end|>", "<|reference_start|> Degrees-of-Freedom Region of the MISO Broadcast Channel with General Mixed-CSIT: In the setting of the two-user broadcast channel, recent work by Maddah-Ali and Tse has shown that knowledge of prior channel state information at the transmitter (CSIT) can be useful, even in the absence of any knowledge of current CSIT. Very recent work by Kobayashi et al., Yang et al., and Gou and Jafar, extended this to the case where, instead of no current CSIT knowledge, the transmitter has partial knowledge, and where under a symmetry assumption, the quality of this knowledge is identical for the different users' channels. Motivated by the fact that in multiuser settings, the quality of CSIT feedback may vary across different links, we here generalize the above results to the natural setting where the current CSIT quality varies for different users' channels. For this setting we derive the optimal degrees-of-freedom (DoF) region, and provide novel multi-phase broadcast schemes that achieve this optimal region. Finally this generalization incorporates and generalizes the corresponding result in Maleki et al. which considered the broadcast channel with one user having perfect CSIT and the other only having prior CSIT. <|reference_end|>", "<|reference_start|> On the synergistic benefits of alternating csit for the miso broadcast channel: The degrees of freedom (DoFs) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, <i>Ii</i>, <i>i</i>=1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (<i>P</i>), delayed (<i>D</i>), or not available (<i>N</i>), i.e., <i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>N</i>,<i>D</i>} , and therefore, the overall CSIT can alternate between the nine resulting states <i>I</i><sub>1</sub><i>I</i><sub>2</sub>. The fraction of time associated with CSIT state <i>I</i><sub>1</sub><i>I</i><sub>2</sub> is denoted by the parameter λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> and it is assumed throughout that λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> = λ<i>I</i><sub>2</sub><i>I</i><sub>1</sub>, i.e., λ<i>PN</i> = λ<i>NP</i>, λ<i>PD</i>=λ<i>DP</i>, λ<i>DN</i>=λ<i>ND</i> . Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. 
Surprisingly, the DoF region is found to depend only on the marginal probabilities (λ<i>P</i>, λ<i>D</i>,λ<i>N</i>) = (Σ<i>I</i><sub>2</sub> λ<i>PI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>DI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>NI</i><sub>2</sub>), <i>I</i><sub>2</sub> ∈ {<i>P</i>, <i>D</i>, <i>N</i>}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, <i>D</i>(λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub>:<i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>D</i>,<i>N</i>}) , is the same as the DoF region with only three CSIT states <i>D</i>(λ<i>PP</i>, λ<i>DD</i>, λ<i>NN</i>), under the same marginal distribution of CSIT states, i.e., (λ<i>PP</i>, λ<i>DD</i>,λ<i>NN</i>)=(λ<i>P</i>,λ<i>D</i>,λ<i>N</i>). The sum-DoF value can be expressed as DoF=min([(4+2λ<i>P</i>)/3], 1+λ<i>P</i>+λ<i>D</i>), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=([3/2] DoF-2,1- [1/2] DoF) when DoF ∈ [[4/3],2] and (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=(0,(DoF-1)<sup>+</sup>) when DoF ∈ [0, [4/3]). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with <i>M</i> transmit antennas and <i>K</i> single antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoFs of min(<i>M</i>,<i>K</i>) is characterized. By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time perfect CSIT is required per user in order to achieve the DoF of min(<i>M</i>,<i>K</i>) is given by min(<i>M</i>,<i>K</i>)/<i>K</i>. <|reference_end|>", "<|reference_start|> On the synergistic benefits of alternating csit for the miso broadcast channel: The degrees of freedom (DoFs) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, <i>Ii</i>, <i>i</i>=1, 2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (<i>P</i>), delayed (<i>D</i>), or not available (<i>N</i>), i.e., <i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>N</i>,<i>D</i>} , and therefore, the overall CSIT can alternate between the nine resulting states <i>I</i><sub>1</sub><i>I</i><sub>2</sub>. The fraction of time associated with CSIT state <i>I</i><sub>1</sub><i>I</i><sub>2</sub> is denoted by the parameter λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> and it is assumed throughout that λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub> = λ<i>I</i><sub>2</sub><i>I</i><sub>1</sub>, i.e., λ<i>PN</i> = λ<i>NP</i>, λ<i>PD</i>=λ<i>DP</i>, λ<i>DN</i>=λ<i>ND</i> . Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two-user MISO BC with alternating CSIT. 
Surprisingly, the DoF region is found to depend only on the marginal probabilities (λ<i>P</i>, λ<i>D</i>,λ<i>N</i>) = (Σ<i>I</i><sub>2</sub> λ<i>PI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>DI</i><sub>2</sub>, Σ<i>I</i><sub>2</sub> λ<i>NI</i><sub>2</sub>), <i>I</i><sub>2</sub> ∈ {<i>P</i>, <i>D</i>, <i>N</i>}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all nine CSIT states, <i>D</i>(λ<i>I</i><sub>1</sub><i>I</i><sub>2</sub>:<i>I</i><sub>1</sub>,<i>I</i><sub>2</sub> ∈ {<i>P</i>,<i>D</i>,<i>N</i>}) , is the same as the DoF region with only three CSIT states <i>D</i>(λ<i>PP</i>, λ<i>DD</i>, λ<i>NN</i>), under the same marginal distribution of CSIT states, i.e., (λ<i>PP</i>, λ<i>DD</i>,λ<i>NN</i>)=(λ<i>P</i>,λ<i>D</i>,λ<i>N</i>). The sum-DoF value can be expressed as DoF=min([(4+2λ<i>P</i>)/3], 1+λ<i>P</i>+λ<i>D</i>), from which one can uniquely identify the minimum required marginal CSIT fractions to achieve any target DoF value as (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=([3/2] DoF-2,1- [1/2] DoF) when DoF ∈ [[4/3],2] and (λ<i>P</i>,λ<i>D</i>)<sub>min</sub>=(0,(DoF-1)<sup>+</sup>) when DoF ∈ [0, [4/3]). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value. Partial results are also presented for the multiuser MISO BC with <i>M</i> transmit antennas and <i>K</i> single antenna users. For this problem, the minimum amount of perfect CSIT required per user to achieve the maximum DoFs of min(<i>M</i>,<i>K</i>) is characterized. By the minimum amount of CSIT per user, we refer to the minimum fraction of time that the transmitter has access to perfect and instantaneous CSIT from a user. Through a novel converse proof and an achievable scheme, it is shown that the minimum fraction of time perfect CSIT is required per user in order to achieve the DoF of min(<i>M</i>,<i>K</i>) is given by min(<i>M</i>,<i>K</i>)/<i>K</i>. <|reference_end|>" ]
[ 1, 3, 7, 8 ]
{"<|cite_2|>": "arxiv-16539", "<|cite_3|>": "arxiv-29681", "<|cite_4|>": "ss-1436014", "<|cite_5|>": "arxiv-31840", "<|cite_6|>": "arxiv-40084", "<|cite_7|>": "arxiv-42269", "<|cite_8|>": "arxiv-51931", "<|cite_10|>": "ss-1436013", "<|cite_11|>": "ss-1436013", "<|cite_12|>": "arxiv-51931", "<|cite_13|>": "ss-1518833", "<|cite_14|>": "ss-1436013"}
1712.09708-1
<|cite_start|> (Reference: The developing visual brain: 1. Background context 2. Paediatric vision testing 3. Models of visual development 4. Newborn vision 5. Development optics - refraction and focusing or accommodation 6. Functional onset of specific cortical modules 7. Development of integration ('binding') and segmentation processes leading to object perception 8. The interlinked approach to development of attention and action 9. Plasticity in visual development 10. Concluding remarks References Index) <|cite_end|>. In contrast to our transfer-learning scheme, their scheme does not consider the phrase ``re-used later in life'', which means that \textit{multiple} and \textit{different} problems should be solved by a universal representation. Moreover, in addition to their criterion of significance and the coherent aggregation considered in <|cite_start|> (Reference: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning: A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.) <|cite_end|>, we proposed a total of six additional criteria important for universality evaluation, as well as two metrics that respect almost all the criteria. Nevertheless, while such a TL scheme has already been used in the NLP community <|cite_start|> (Reference: Very Deep Convolutional Networks for Text Classification: The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.) <|cite_end|> <|cite_start|> (Reference: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data: Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.) <|cite_end|> <|cite_start|> (Reference: SentEval: An Evaluation Toolkit for Universal Sentence Representations: We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.) <|cite_end|> <|cite_start|> (Reference: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning: A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.) <|cite_end|> for universal representations, to the best of our knowledge, we are the first to propose it in the vision community and, more importantly, to link it to the claim of <|cite_start|> (Reference: The developing visual brain: 1. Background context 2. Paediatric vision testing 3. Models of visual development 4. Newborn vision 5. Development optics - refraction and focusing or accommodation 6. Functional onset of specific cortical modules 7. Development of integration ('binding') and segmentation processes leading to object perception 8. The interlinked approach to development of attention and action 9. Plasticity in visual development 10. Concluding remarks References Index) <|cite_end|>. \subsection{Cognitive Studies in Computer Vision} \label{sec:sota_solution_cognitive} A final line of work draws inspiration from cognitive studies in computer vision <|cite_start|> (Reference: Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition: As visual recognition scales up to ever larger numbers of categories, maintaining high accuracy is increasingly difficult. In this work, we study the problem of optimizing accuracy-specificity trade-offs in large scale recognition, motivated by the observation that object categories form a semantic hierarchy consisting of many levels of abstraction. A classifier can select the appropriate level, trading off specificity for accuracy in case of uncertainty. By optimizing this trade-off, we obtain classifiers that try to be as specific as possible while guaranteeing an arbitrarily high accuracy. We formulate the problem as maximizing information gain while ensuring a fixed, arbitrarily small error rate with a semantic hierarchy. We propose the Dual Accuracy Reward Trade-off Search (DARTS) algorithm and prove that, under practical conditions, it converges to an optimal solution. Experiments demonstrate the effectiveness of our algorithm on datasets ranging from 65 to over 10,000 categories.) <|cite_end|> <|cite_start|> (Reference: Choosing basic-level concept names using visual and language context: We study basic-level categories for describing visual concepts, and empirically observe context-dependant basic level names across thousands of concepts. We propose methods for predicting basic-level names using a series of classification and ranking tasks, producing the first large scale catalogue of basic-level names for hundreds of thousands of images depicting thousands of visual concepts. We also demonstrate the usefulness of our method with a picture-to-word task, showing strong improvement over recent work by Ordonez et al, by modeling of both visual and language context. Our study suggests that a model for naming visual concepts is an important part of any automatic image/video captioning and visual story-telling system.) <|cite_end|> <|cite_start|> (Reference: Predicting Entry-Level Categories: ) <|cite_end|> <|cite_start|> (Reference: Diverse concept-level features for multi-object classification: We consider the problem of image classification with semantic features that are built from a set of base classifier outputs, each reflecting visual concepts. However, existing approaches consider visual concepts independently from each other whereas they are often linked together. When those relations are considered, existing models strongly rely on image low-level features, yielding in irrelevant relations when the low-level representation fails. On the contrary, the approach we propose, uses existing human knowledge, the application context itself and the human categorization mechanism to reflect complex relations between concepts. By nesting this human knowledge and the application context in the concept detection and selection processes, our final semantic feature captures the most useful information for an effective categorization. Thus, it enables to give good representation, even if some important concepts failed to be recognized. Experimental validation is conducted on three publicly available benchmarks of multi-class object classification and leads to results that outperforms comparable approaches.) <|cite_end|>. Generally, their goal is to output basic-level concepts of an image from a set of predicted finer ones (see the sketch below). An exception is the work of <|cite_start|> (Reference: Diverse concept-level features for multi-object classification: We consider the problem of image classification with semantic features that are built from a set of base classifier outputs, each reflecting visual concepts. However, existing approaches consider visual concepts independently from each other whereas they are often linked together. When those relations are considered, existing models strongly rely on image low-level features, yielding in irrelevant relations when the low-level representation fails. On the contrary, the approach we propose, uses existing human knowledge, the application context itself and the human categorization mechanism to reflect complex relations between concepts. By nesting this human knowledge and the application context in the concept detection and selection processes, our final semantic feature captures the most useful information for an effective categorization. Thus, it enables to give good representation, even if some important concepts failed to be recognized. Experimental validation is conducted on three publicly available benchmarks of multi-class object classification and leads to results that outperforms comparable approaches.) <|cite_end|>, which is closest to ours since they consider categorical levels in their representation. As in our work, their system reflects the psychological hint stating that, even if humans tend to categorize objects at the subordinate level, they are still aware of the other categorical levels <|cite_start|> (Reference: Principles of Categorization: ) <|cite_end|>. However, the key difference lies in how we integrate that hint, as well as in the purpose of its consideration. Indeed, our goal is to diversify the features learned in CNNs, while they aim at solving the problem of generic categories that output low scores because of their high intra-class variance, in order to force the beneficial consideration of such categories. Moreover, we opt for an integration at three levels (data, learning and representation), while they apply it only after the computation of their semantic representation <|cite_start|> (Reference: Large-Scale Image Mining with Flickr Groups: ) <|cite_end|> <|cite_start|> (Reference: Constrained local enhancement of semantic features by content-based sparsity: Semantic features represent images by the outputs of a set of visual concept classifiers and have shown interesting performances in image classification and retrieval. All classifier outputs are usually exploited but it was recently shown that feature sparsification improves both performance and scalability. However, existing approaches consider a fixed sparsity level which disregards the actual content of individual images. In this paper, we propose a method to determine automatically a level of sparsity for the semantic features that is adapted to each image content. This method takes into account the amount of information contained by the image through a modeling of the semantic feature entropy and the confidence of individual dimensions of the feature.
We also investigate the use of local regions of the image to further improve the quality of semantic features. Experimental validation is conducted on three benchmarks (Pascal VOC 2007, VOC 2012 and MIT Indoor) for image classification and two of them for image retrieval. Our method obtains competitive results on image classification and achieves state-of-the-art performances on image retrieval.) <|cite_end|>. <|paper_end|>
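A schematic of the evaluation protocol implied above — one frozen representation judged by how well it serves several different downstream problems — might look as follows; the task names and the plain-mean aggregation are placeholders, not the six criteria or two metrics mentioned in the passage:

```python
import statistics

def universality_score(task_accuracies: dict[str, float]) -> float:
    """Summarize the transfer performance of ONE frozen representation.

    The dict maps each downstream task to the accuracy of a cheap probe
    (e.g., a linear classifier) trained on top of the frozen features.
    Averaging is the simplest possible aggregation; it rewards features
    that are re-usable across multiple, different problems.
    """
    return statistics.mean(task_accuracies.values())

# Hypothetical probe accuracies of one representation on three tasks.
print(universality_score({"objects": 0.81, "scenes": 0.66, "actions": 0.58}))
```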
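And here is the sketch promised above of the fine-to-basic mapping that most of the cited cognitive-inspired systems perform; the two-level hierarchy and the max-pooling rule are illustrative assumptions only, not the cited methods:

```python
from collections import defaultdict

# Hypothetical subordinate-to-basic hierarchy (illustrative labels only).
PARENT = {
    "labrador": "dog", "poodle": "dog",
    "siamese": "cat", "persian": "cat",
}

def basic_level_scores(fine_scores: dict[str, float]) -> dict[str, float]:
    """Turn fine-grained classifier outputs into basic-level concept scores.

    Each basic-level score is the max over its subordinate concepts
    (scores assumed non-negative), one simple way to surface a generic
    category even when its own classifier scores low.
    """
    basic = defaultdict(float)
    for concept, score in fine_scores.items():
        parent = PARENT.get(concept, concept)  # unknown concepts map to themselves
        basic[parent] = max(basic[parent], score)
    return dict(basic)

print(basic_level_scores({"labrador": 0.7, "poodle": 0.2, "siamese": 0.4}))
# -> {'dog': 0.7, 'cat': 0.4}
```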
[ "<|reference_start|> Supervised Learning of Universal Sentence Representations from Natural Language Inference Data: Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available. <|reference_end|>", "<|reference_start|> Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning: A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations. <|reference_end|>", "<|reference_start|> Predicting Entry-Level Categories: <|reference_end|>", "<|reference_start|> Diverse concept-level features for multi-object classification: We consider the problem of image classification with semantic features that are built from a set of base classifier outputs, each reflecting visual concepts. However, existing approaches consider visual concepts independently from each other whereas they are often linked together. When those relations are considered, existing models strongly rely on image low-level features, yielding in irrelevant relations when the low-level representation fails. On the contrary, the approach we propose, uses existing human knowledge, the application context itself and the human categorization mechanism to reflect complex relations between concepts. By nesting this human knowledge and the application context in the concept detection and selection processes, our final semantic feature captures the most useful information for an effective categorization. Thus, it enables to give good representation, even if some important concepts failed to be recognized. 
Experimental validation is conducted on three publicly available benchmarks of multi-class object classification and leads to results that outperform comparable approaches. <|reference_end|>" ]
[ 3, 5, 9, 10 ]
{"<|cite_1|>": "ss-972908", "<|multi_cite_2_1|>": "ss-1016684", "<|multi_cite_2_2|>": "ss-972909", "<|multi_cite_3_1|>": "ss-773172", "<|multi_cite_3_2|>": "arxiv-124825", "<|multi_cite_3_3|>": "arxiv-152969", "<|multi_cite_4_1|>": "ss-773172", "<|multi_cite_4_2|>": "arxiv-124825", "<|multi_cite_4_3|>": "arxiv-152969", "<|cite_5|>": "arxiv-153390", "<|multi_cite_6_1|>": "arxiv-99469", "<|multi_cite_6_2|>": "arxiv-123398", "<|multi_cite_6_3|>": "ss-703279", "<|cite_7|>": "ss-972910", "<|multi_cite_8_1|>": "ss-972911", "<|multi_cite_8_2|>": "ss-1006955", "<|cite_9|>": "arxiv-68419", "<|cite_10|>": "arxiv-124825", "<|cite_11|>": "ss-972908", "<|multi_cite_12_1|>": "arxiv-68419", "<|multi_cite_12_2|>": "arxiv-62580", "<|cite_13|>": "ss-972908", "<|cite_14|>": "ss-972910", "<|cite_15|>": "arxiv-68419", "<|cite_16|>": "ss-972910", "<|cite_17|>": "ss-972908", "<|cite_18|>": "ss-1016684", "<|cite_19|>": "arxiv-153390", "<|cite_20|>": "ss-773172", "<|multi_cite_21_1|>": "arxiv-124825", "<|multi_cite_21_2|>": "arxiv-152969", "<|multi_cite_22_1|>": "arxiv-99469", "<|multi_cite_22_2|>": "arxiv-123398", "<|cite_23|>": "arxiv-153390", "<|cite_24|>": "arxiv-124825", "<|cite_25|>": "ss-972909", "<|multi_cite_26_1|>": "arxiv-62580", "<|multi_cite_26_2|>": "ss-773172", "<|multi_cite_26_3|>": "arxiv-124825", "<|multi_cite_26_4|>": "ss-1022003", "<|multi_cite_27_1|>": "ss-690198", "<|multi_cite_27_2|>": "arxiv-92740", "<|multi_cite_27_3|>": "ss-972912", "<|multi_cite_28_1|>": "ss-972913", "<|multi_cite_28_2|>": "arxiv-86717", "<|multi_cite_29_1|>": "ss-773172", "<|multi_cite_29_2|>": "arxiv-124825", "<|multi_cite_30_1|>": "arxiv-104857", "<|multi_cite_30_2|>": "ss-972914", "<|multi_cite_31_1|>": "arxiv-96401", "<|multi_cite_31_2|>": "arxiv-74282", "<|multi_cite_31_3|>": "ss-1265049", "<|multi_cite_31_4|>": "ss-1460880", "<|multi_cite_31_5|>": "ss-972915", "<|multi_cite_31_6|>": "ss-1100951", "<|multi_cite_32_1|>": "arxiv-96401", "<|multi_cite_32_2|>": "arxiv-74282", "<|multi_cite_32_3|>": "ss-1460880", "<|multi_cite_32_4|>": "ss-1100951", "<|cite_33|>": "ss-972914", "<|cite_34|>": "ss-972908", "<|multi_cite_35_1|>": "ss-773172", "<|multi_cite_35_2|>": "arxiv-124825", "<|multi_cite_35_3|>": "arxiv-152969", "<|cite_36|>": "ss-972908", "<|cite_37|>": "arxiv-153390", "<|multi_cite_38_1|>": "arxiv-99469", "<|multi_cite_38_2|>": "arxiv-123398", "<|multi_cite_38_3|>": "arxiv-151603", "<|multi_cite_38_4|>": "arxiv-153390", "<|cite_39|>": "ss-972908", "<|multi_cite_40_1|>": "ss-920638", "<|multi_cite_40_2|>": "ss-972916", "<|multi_cite_40_3|>": "ss-1300110", "<|multi_cite_40_4|>": "ss-972917", "<|cite_41|>": "ss-972917", "<|cite_42|>": "ss-1006955", "<|multi_cite_43_1|>": "ss-972918", "<|multi_cite_43_2|>": "ss-972919"}
1902.02823
<|paper_start|> Title: Compatible Natural Gradient Policy Search Abstract: Compatible Natural Gradient Policy Search: Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks. Introduction The natural gradient <|cite_start|> (Reference: Natural gradient works efficiently in learning: When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed.) <|cite_end|> is an integral part of many reinforcement learning <|cite_start|> (Reference: A natural policy gradient: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris.) <|cite_end|> <|cite_start|> (Reference: Covariant Policy Search: We investigate the problem of non-covariant behavior of policy gradient reinforcement learning algorithms. The policy gradient approach is amenable to analysis by information geometric methods. This leads us to propose a natural metric on controller parameterization that results from considering the manifold of probability distributions over paths induced by a stochastic controller. Investigation of this approach leads to a covariant gradient ascent rule. Interesting properties of this rule are discussed, including its relation with actor-critic style reinforcement learning algorithms. 
The algorithms discussed here are computationally quite efficient and on some interesting problems lead to dramatic performance improvement over noncovariant rules.) <|cite_end|> <|cite_start|> (Reference: Natural Actor-critic: In recent years, renewable energy has attracted attention against the background of curbing CO2 emissions and the future depletion of fossil fuels. However, the existing centralized power network is said to be not necessarily well suited to renewable energy, which presupposes non-stationary generation at the ends of the grid. For this reason, distributed power networks, including microgrids, have been studied. This work takes up the ECO net, an autonomous decentralized power network, and examines a mechanism for automating power interchange through electricity trading between minimal clusters, the terminal generation and consumption nodes of the ECO net. To automate electricity trading, it is desirable to optimize each minimal cluster's trading policy based on its conditions of power loss, generation, and consumption. In this paper, we aim to construct adaptive trading agents that reduce power loss and maximize profit by training the trading agents with reinforcement learning. However, such a system becomes a multi-agent reinforcement learning setting, in which problems of incomplete perception and simultaneous learning are known to arise. We therefore construct the adaptive trading agents with policy gradient methods, in particular Natural Actor-Critic, which are considered robust to these problems. To show the effectiveness of the proposed method, we ran simulation experiments on a local cluster composed of six minimal clusters. The simulations showed that the agents can learn appropriate trading with Natural Actor-Critic and that, even in a multi-agent reinforcement learning environment, good learning results are obtained at least under conditions where one agent follows a fixed trading policy.) <|cite_end|> <|cite_start|> (Reference: Revisiting Natural Actor-Critics with Value Function Approximation: ) <|cite_end|> and optimization <|cite_start|> (Reference: Natural Evolution Strategies: This paper presents Natural Evolution Strategies (NES), a recent family of algorithms that constitute a more principled approach to black-box optimization than established evolutionary algorithms. NES maintains a parameterized distribution on the set of solution candidates, and the natural gradient is used to update the distribution's parameters in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, ranging from general-purpose multi-variate normal distributions to heavy-tailed and separable distributions tailored towards global optimization and search in high dimensional spaces, respectively. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others.) <|cite_end|> algorithms. With the natural gradient, gradient updates become invariant to affine transformations of the parameter space, and the natural gradient is also often used to define a trust region for the policy update. The trust region is defined by a bound on the Kullback-Leibler (KL) <|cite_start|> (Reference: Relative entropy policy search: This technical report describes a cute idea of how to create new policy search approaches. It directly relates to the Natural Actor-Critic methods but allows the derivation of one-shot solutions. Future work may include the application to interesting problems. 1 Problem Statement In reinforcement learning, we have an agent which is in a state $s$ and draws actions $a$ from a policy $\pi$. Upon an action, it receives a reward $r(s, a) = \mathcal{R}_{sa}$ and transfers to a next state $s'$ where it will take a next action $a'$. In most cases, we have Markovian environments and policies, where $s' \sim p(s'|s, a) = \mathcal{P}_{sa}^{s'}$ and $a \sim \pi(a|s)$. The goal of all reinforcement learning methods is the maximization of the expected return $\bar{J}(\pi) = E\left\{\sum_{t=0}^{T} r(s_t, a_t)\right\}$ (1). We are generally interested in two cases, i.e., (i) the episodic open-loop case where the system is always restarted from the initial state distribution $p(s_0)$, and (ii) the stationary infinite-horizon case where $T \to \infty$. Both have substantial differences in their mathematical treatment as well as their optimal solution.
1.1 Episodic Open-Loop Case In the episodic open-loop case, a distribution $p(\tau)$ over trajectories $\tau$ and a return $R(\tau)$ of a trajectory $\tau$ are assumed, both given by $p(\tau) = p(s_0) \prod_{t=1}^{T} p(s_{t+1}|s_t, a_t)\, \pi(a_t|s_t)$ (2) and $R(\tau) = \sum_{t=0}^{T} r(s_t, a_t)$ (3). The expected return can now be given as $\bar{J}(\pi) = \sum_{\tau} p(\tau) R(\tau)$. Note that all approximations to the optimal policy depend on the initial state distribution $p(s_0)$. This case has been predominant in our previous work.) <|cite_end|> <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> divergence between the new and old policy, and it is well known that the Fisher information matrix, used to compute the natural gradient, is a second-order approximation of the KL divergence. Such trust-region optimization is common in policy search and has been successfully used to optimize neural network policies. However, many properties of the natural gradient are still under-explored, such as compatible value function approximation <|cite_start|> (Reference: Policy gradient methods for reinforcement learning with function approximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.) <|cite_end|> for neural networks, the approximation quality of the KL-divergence, and the online performance of the natural gradient. We analyze the convergence of the natural gradient analytically and empirically and show that the natural gradient does not give fast convergence if we do not add an entropy regularization term. This entropy regularization term results in a new update rule which ensures that the policy loses entropy at the correct pace, leading to convergence to a good policy. We further show that the natural gradient is the optimal (and not the approximate) solution to a trust-region optimization problem for log-linear models if the natural parameters of the distribution are optimized and we use compatible value function approximation.
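To make the trust-region connection concrete, recall the standard second-order derivation (stated here in our notation as background, not as a new result): expanding the KL-divergence bound to second order turns the trust-region problem into a natural gradient step with a closed-form step size,
\[
\max_{\delta}\; \nabla_\theta J(\theta)^{\top}\delta \quad \text{s.t.} \quad \mathrm{KL}\big(\pi_{\theta}\,\|\,\pi_{\theta+\delta}\big) \approx \tfrac{1}{2}\,\delta^{\top} F(\theta)\,\delta \le \epsilon, \qquad F(\theta) = \mathbb{E}\big[\nabla_\theta \log \pi_\theta\, \nabla_\theta \log \pi_\theta^{\top}\big],
\]
\[
\delta^{*} = \eta\, F(\theta)^{-1} \nabla_\theta J(\theta), \qquad \eta = \sqrt{\frac{2\epsilon}{\nabla_\theta J(\theta)^{\top} F(\theta)^{-1} \nabla_\theta J(\theta)}}.
\]
The contribution below sharpens this picture: with the natural parameterization and compatible features, this step is exact rather than a second-order approximation for the log-linear parameters.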
We analyze compatible value function approximation for neural networks and show that the components of this approximation are composed of two terms: a state value function that is subtracted from a state-action value function. While it is well known that the compatible function approximation denotes an advantage function, the exact structure was unclear. We show that, using compatible value function approximation, we can derive algorithms similar to trust region policy search that obtain the policy update in closed form. A summary of our contributions is as follows: \begin{itemize} \item It is well known that the second-order Taylor approximation to trust-region optimization with a KL-divergence bound leads to an update direction identical to the natural gradient. However, what is not known is that when using the natural parameterization for an exponential policy and using compatible features we can compute the step-size for the natural gradient that solves the trust-region update exactly for the log-linear parameters. \item When using an entropy bound in addition to the common KL-divergence bound, the compatible features allow us to compute the exact update for the trust-region problem in the log-linear case, and for a Gaussian policy with a state-independent covariance we can compute the exact update for the covariance also in the non-linear case. \item Our new algorithm called Compatible Policy Search (COPOS), based on the above insights, outperforms comparison methods in both continuous control and partially observable discrete-action experiments due to entropy control allowing for principled exploration. \end{itemize} Related Work \label{sec:related_work} As in classical reinforcement learning, the leading contenders in deep reinforcement learning can be divided into value-function-based methods such as Q-learning with the deep Q-network (DQN) <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|>, actor-critic methods <|cite_start|> (Reference: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines) <|cite_end|> <|cite_start|> (Reference: Guide actor-critic for continuous control: Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic.
However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning. Our main theoretical contributions are twofold. First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous control.) <|cite_end|> <|cite_start|> (Reference: Maximum a Posteriori Policy Optimisation: We introduce a new algorithm for reinforcement learning called Maximum a posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings while achieving similar or better final performance.) <|cite_end|>, policy gradient methods such as deep deterministic policy gradient (DDPG) <|cite_start|> (Reference: Deterministic Policy Gradient Algorithms: In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.) <|cite_end|> <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.)
<|cite_end|> and policy search methods based on information-theoretic / trust-region ideas, such as proximal policy optimization (PPO) <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|> and trust region policy optimization (TRPO) <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|>. Trust region optimization was introduced in the relative entropy policy search (REPS) method <|cite_start|> (Reference: Relative entropy policy search: This technical report describes a cute idea of how to create new policy search approaches. It directly relates to the Natural Actor-Critic methods but allows the derivation of one-shot solutions. Future work may include the application to interesting problems. 1 Problem Statement In reinforcement learning, we have an agent which is in a state $s$ and draws actions $a$ from a policy $\pi$. Upon an action, it receives a reward $r(s, a) = \mathcal{R}_{sa}$ and transfers to a next state $s'$ where it will take a next action $a'$. In most cases, we have Markovian environments and policies, where $s' \sim p(s'|s, a) = \mathcal{P}_{sa}^{s'}$ and $a \sim \pi(a|s)$. The goal of all reinforcement learning methods is the maximization of the expected return $\bar{J}(\pi) = E\left\{\sum_{t=0}^{T} r(s_t, a_t)\right\}$ (1). We are generally interested in two cases, i.e., (i) the episodic open-loop case where the system is always restarted from the initial state distribution $p(s_0)$, and (ii) the stationary infinite-horizon case where $T \to \infty$. Both have substantial differences in their mathematical treatment as well as their optimal solution. 1.1 Episodic Open-Loop Case In the episodic open-loop case, a distribution $p(\tau)$ over trajectories $\tau$ and a return $R(\tau)$ of a trajectory $\tau$ are assumed, both given by $p(\tau) = p(s_0) \prod_{t=1}^{T} p(s_{t+1}|s_t, a_t)\, \pi(a_t|s_t)$ (2) and $R(\tau) = \sum_{t=0}^{T} r(s_t, a_t)$ (3). The expected return can now be given as $\bar{J}(\pi) = \sum_{\tau} p(\tau) R(\tau)$.
Note that all approximations to the optimal policy depend on the initial state distribution $p(s_0)$. This case has been predominant in our previous work.) <|cite_end|>. TRPO and TNPG <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> are the first methods to apply trust-region optimization successfully to neural networks. In contrast to TRPO and TNPG, we derive our method from the compatible value function approximation perspective. TRPO and TNPG differ from our approach in that they do not use an entropy constraint and do not consider the difference between the log-linear and non-linear parameters for their update. On the technical level, compared to TRPO, we can update the log-linear parameters (the output layer of the neural network and the covariance) with an exact update step, while TRPO does a line search to find the update step. Moreover, for the covariance we can find an exact update to enforce a specific entropy and thus control exploration, while TRPO does not bound the entropy, only the KL-divergence. PPO also applies an adaptive KL penalty term. <|cite_start|> (Reference: A natural policy gradient: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris.) <|cite_end|> <|cite_start|> (Reference: Covariant Policy Search: We investigate the problem of non-covariant behavior of policy gradient reinforcement learning algorithms. The policy gradient approach is amenable to analysis by information geometric methods. This leads us to propose a natural metric on controller parameterization that results from considering the manifold of probability distributions over paths induced by a stochastic controller. Investigation of this approach leads to a covariant gradient ascent rule. Interesting properties of this rule are discussed, including its relation with actor-critic style reinforcement learning algorithms. The algorithms discussed here are computationally quite efficient and on some interesting problems lead to dramatic performance improvement over noncovariant rules.)
<|cite_end|> <|cite_start|> (Reference: Natural Actor-critic: In recent years, renewable energy has attracted attention against the background of curbing CO2 emissions and the future depletion of fossil fuels. However, the existing centralized power network is said to be not necessarily well suited to renewable energy, which presupposes non-stationary generation at the ends of the grid. For this reason, distributed power networks, including microgrids, have been studied. This work takes up the ECO net, an autonomous decentralized power network, and examines a mechanism for automating power interchange through electricity trading between minimal clusters, the terminal generation and consumption nodes of the ECO net. To automate electricity trading, it is desirable to optimize each minimal cluster's trading policy based on its conditions of power loss, generation, and consumption. In this paper, we aim to construct adaptive trading agents that reduce power loss and maximize profit by training the trading agents with reinforcement learning. However, such a system becomes a multi-agent reinforcement learning setting, in which problems of incomplete perception and simultaneous learning are known to arise. We therefore construct the adaptive trading agents with policy gradient methods, in particular Natural Actor-Critic, which are considered robust to these problems. To show the effectiveness of the proposed method, we ran simulation experiments on a local cluster composed of six minimal clusters. The simulations showed that the agents can learn appropriate trading with Natural Actor-Critic and that, even in a multi-agent reinforcement learning environment, good learning results are obtained at least under conditions where one agent follows a fixed trading policy.) <|cite_end|> <|cite_start|> (Reference: Revisiting Natural Actor-Critics with Value Function Approximation: ) <|cite_end|> have also suggested similar update rules based on the natural gradient for the policy gradient framework. <|cite_start|> (Reference: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines) <|cite_end|> applied approximate natural gradient updates to both the actor and the critic in an actor-critic framework but did not utilize compatible value functions or an entropy bound. <|cite_start|> (Reference: Natural Actor-critic: In recent years, renewable energy has attracted attention against the background of curbing CO2 emissions and the future depletion of fossil fuels. However, the existing centralized power network is said to be not necessarily well suited to renewable energy, which presupposes non-stationary generation at the ends of the grid. For this reason, distributed power networks, including microgrids, have been studied. This work takes up the ECO net, an autonomous decentralized power network, and examines a mechanism for automating power interchange through electricity trading between minimal clusters, the terminal generation and consumption nodes of the ECO net. To automate electricity trading, it is desirable to optimize each minimal cluster's trading policy based on its conditions of power loss, generation, and consumption. In this paper, we aim to construct adaptive trading agents that reduce power loss and maximize profit by training the trading agents with reinforcement learning. However, such a system becomes a multi-agent reinforcement learning setting, in which problems of incomplete perception and simultaneous learning are known to arise. We therefore construct the adaptive trading agents with policy gradient methods, in particular Natural Actor-Critic, which are considered robust to these problems. To show the effectiveness of the proposed method, we ran simulation experiments on a local cluster composed of six minimal clusters. The simulations showed that the agents can learn appropriate trading with Natural Actor-Critic and that, even in a multi-agent reinforcement learning environment, good learning results are obtained at least under conditions where one agent follows a fixed trading policy.) <|cite_end|> <|cite_start|> (Reference: Revisiting Natural Actor-Critics with Value Function Approximation: ) <|cite_end|> investigated the idea of compatible value functions in combination with the natural gradient but used manual learning rates instead of trust-region optimization.
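As an illustration of the compatible-features machinery these works build on, the following minimal sketch (ours, not code from any of the cited papers; the advantage estimates are random placeholders standing in for Monte Carlo or critic-based estimates) fits the compatible value function by least squares for a log-linear softmax policy and takes a KL-bounded natural gradient step. By Kakade's compatible-function-approximation result, the least-squares weights w are themselves the natural gradient direction, so no explicit Fisher matrix inversion is needed.

import numpy as np

# Natural policy gradient via compatible value function approximation for a
# log-linear (softmax) policy. The compatible features are
# psi(s, a) = grad_theta log pi(a|s); regressing advantage estimates on psi
# yields weights w equal to F(theta)^{-1} grad J, i.e. the natural gradient.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
dim = n_states * n_actions  # tabular one-hot features for clarity

def policy(theta, s):
    z = theta.reshape(n_states, n_actions)[s]
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_pi(theta, s, a):
    # d log pi(a|s) / d theta for the softmax: one_hot(a) - pi(.|s) in state s's block.
    g = np.zeros((n_states, n_actions))
    g[s] = -policy(theta, s)
    g[s, a] += 1.0
    return g.ravel()

theta = np.zeros(dim)
epsilon = 0.01  # KL trust-region bound

# A batch of (state, action, advantage-estimate) samples.
states = rng.integers(0, n_states, size=256)
actions = np.array([rng.choice(n_actions, p=policy(theta, s)) for s in states])
advantages = rng.normal(size=256)  # placeholder advantage estimates

# Compatible value function fit: find w minimizing ||psi @ w - A||^2.
psi = np.stack([grad_log_pi(theta, s, a) for s, a in zip(states, actions)])
w, *_ = np.linalg.lstsq(psi, advantages, rcond=None)

# Step size from the second-order KL bound 0.5 * eta^2 * w^T F w <= epsilon,
# with w^T F w estimated as the batch mean of (psi @ w)^2.
w_F_w = np.mean((psi @ w) ** 2)
eta = np.sqrt(2.0 * epsilon / max(w_F_w, 1e-12))
theta = theta + eta * w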
The approaches in <|cite_start|> (Reference: Model-Based Relative Entropy Stochastic Search: Stochastic search algorithms are general black-box optimizers. Due to their ease of use and their generality, they have recently also gained a lot of attention in operations research, machine learning and policy search. Yet, these algorithms require a lot of evaluations of the objective, scale poorly with the problem dimension, are affected by highly noisy objective functions and may converge prematurely. To alleviate these problems, we introduce a new surrogate-based stochastic search approach. We learn simple, quadratic surrogate models of the objective function. As the quality of such a quadratic approximation is limited, we do not greedily exploit the learned models. The algorithm can be misled by an inaccurate optimum introduced by the surrogate. Instead, we use information theoretic constraints to bound the 'distance' between the new and old data distribution while maximizing the objective function. Additionally, the new method is able to sustain the exploration of the search distribution to avoid premature convergence. We compare our method with state-of-the-art black-box optimization methods on standard uni-modal and multi-modal optimization functions, on simulated planar robot tasks and a complex robot ball throwing task. The proposed method considerably outperforms the existing approaches.) <|cite_end|> <|cite_start|> (Reference: Model-free trajectory optimization for reinforcement learning: Many of the recent Trajectory Optimization algorithms alternate between local approximation of the dynamics and conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local quadratic time-dependent Q-Function, allowing the derivation of the policy update in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics, demonstrating improved performance in comparison to related Trajectory Optimization algorithms linearizing the dynamics.) <|cite_end|> use an entropy bound similar to ours. However, the approach in <|cite_start|> (Reference: Model-Based Relative Entropy Stochastic Search: Stochastic search algorithms are general black-box optimizers. Due to their ease of use and their generality, they have recently also gained a lot of attention in operations research, machine learning and policy search. Yet, these algorithms require a lot of evaluations of the objective, scale poorly with the problem dimension, are affected by highly noisy objective functions and may converge prematurely. To alleviate these problems, we introduce a new surrogate-based stochastic search approach. We learn simple, quadratic surrogate models of the objective function. As the quality of such a quadratic approximation is limited, we do not greedily exploit the learned models. The algorithm can be misled by an inaccurate optimum introduced by the surrogate. Instead, we use information theoretic constraints to bound the 'distance' between the new and old data distribution while maximizing the objective function. Additionally, the new method is able to sustain the exploration of the search distribution to avoid premature convergence.
We compare our method with state-of-the-art black-box optimization methods on standard uni-modal and multi-modal optimization functions, on simulated planar robot tasks and a complex robot ball throwing task. The proposed method considerably outperforms the existing approaches.) <|cite_end|> is a stochastic search method, that is, it ignores sequential decisions and views the problem as black-box optimization, and the approach in <|cite_start|> (Reference: Model-free trajectory optimization for reinforcement learning: Many of the recent Trajectory Optimization algorithms alternate between local approximation of the dynamics and conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local quadratic time-dependent Q-Function, allowing the derivation of the policy update in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics, demonstrating improved performance in comparison to related Trajectory Optimization algorithms linearizing the dynamics.) <|cite_end|> is restricted to trajectory optimization. Moreover, neither of these approaches explicitly handles non-linear parameters such as those found in neural networks. The entropy bound used in <|cite_start|> (Reference: Guide actor-critic for continuous control: Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic. However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning. Our main theoretical contributions are twofold. First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous control.) <|cite_end|> is similar to ours; however, their method depends on second-order approximations of a deep Q-function, resulting in a much more complex policy update that can suffer from the instabilities of learning a non-linear Q-function. For exploration, one can in general add an entropy term to the objective. In the experiments, we compare against TRPO with this additive entropy term. In preliminary experiments, to control entropy in TRPO, we also combined the entropy and KL-divergence constraints into a single constraint, without success. <|paper_end|>
[ "<|reference_start|> Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters. <|reference_end|>", "<|reference_start|> Policy gradient methods for reinforcement learning with function\napproximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. <|reference_end|>", "<|reference_start|> Maximum a Posteriori Policy Optimisation: We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relative entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings while achieving similar or better final performance. <|reference_end|>", "<|reference_start|> Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. 
With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines <|reference_end|>" ]
[ 7, 8, 12, 23 ]
{"<|cite_1|>": "ss-690072", "<|multi_cite_2_1|>": "ss-1516973", "<|multi_cite_2_2|>": "ss-1340510", "<|multi_cite_2_3|>": "ss-1918357", "<|multi_cite_2_4|>": "ss-1876906", "<|cite_3|>": "arxiv-22418", "<|multi_cite_4_1|>": "ss-688163", "<|multi_cite_4_2|>": "arxiv-73321", "<|cite_5|>": "ss-767671", "<|cite_6|>": "ss-749221", "<|multi_cite_7_1|>": "arxiv-132151", "<|multi_cite_7_2|>": "ss-1888468", "<|multi_cite_7_3|>": "arxiv-162921", "<|multi_cite_8_1|>": "ss-997710", "<|multi_cite_8_2|>": "arxiv-83736", "<|cite_9|>": "arxiv-129813", "<|cite_10|>": "arxiv-73321", "<|cite_11|>": "ss-688163", "<|cite_12|>": "arxiv-73321", "<|multi_cite_17_1|>": "ss-1516973", "<|multi_cite_17_2|>": "ss-1340510", "<|multi_cite_17_3|>": "ss-1918357", "<|multi_cite_17_4|>": "ss-1876906", "<|cite_18|>": "arxiv-132151", "<|multi_cite_19_1|>": "ss-1918357", "<|multi_cite_19_2|>": "ss-1876906", "<|multi_cite_13_1|>": "ss-2296400", "<|multi_cite_13_2|>": "ss-679103", "<|cite_14|>": "ss-2296400", "<|cite_15|>": "ss-679103", "<|cite_16|>": "ss-1888468"}
2309.04862
<|paper_start|> Title: Distributional Data Augmentation Methods for Low Resource Language Abstract: Distributional Data Augmentation Methods for Low Resource Language: Text augmentation is a technique for constructing synthetic data from an under-resourced corpus to improve predictive performance. Synthetic data generation is common in numerous domains. However, recently text augmentation has emerged in natural language processing (NLP) to improve downstream tasks. One of the current state-of-the-art text augmentation techniques is easy data augmentation (EDA), which augments the training data by injecting and replacing synonyms and randomly permuting sentences. One major obstacle with EDA is the need for versatile and complete synonym dictionaries, which cannot be easily found in low-resource languages. To improve the utility of EDA, we propose two extensions, easy distributional data augmentation (EDDA) and type specific similar word replacement (TSSR), which use semantic word context information and part-of-speech tags for word replacement and augmentation. In an extensive empirical evaluation, we show the utility of the proposed methods, measured by F1 score, on two representative datasets in Swedish as an example of a low-resource language. With the proposed methods, we show that augmented data improve classification performance in low-resource settings. Introduction Augmentation is a technique to construct synthetic training data from available datasets. Various augmentation techniques have been used, mainly in the computer vision field, to improve machine learning models <|cite_start|> (Reference: Text Data Augmentation for Deep Learning: ) <|cite_end|>, especially the field's huge deep learning models. However, text augmentation has recently been growing as well, in step with the massive models now available <|cite_start|> (Reference: A Survey on Data Augmentation for Text Classification: Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy for existing works, this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.) <|cite_end|>. The two core reasons to use text augmentation are as follows: 1) some languages are in low-resource domains, so it is hard to get enough data to train a model;
2) augmentation can help strengthen decision boundaries, leading to more robust classifiers or better uncertainty estimates, so the model becomes more familiar with the local space around examples <|cite_start|> (Reference: A Survey on Data Augmentation for Text Classification: Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy for existing works, this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.) <|cite_end|>. Unlike images, languages cannot be generalized or merged: each language has only its own resources, while images can easily be merged regardless of topic and type. In this sense, text augmentation techniques can benefit low-resource languages such as Swedish, Kazakh, Tamil, Welsh, Upper Sorbian, and many more <|cite_start|> (Reference: To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP: Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including the traditional sequence tagging ones. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterparts in low-resource scenarios. One methodology to counteract this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a load of textual augmentation techniques, the field still lacks a systematic performance analysis on a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies which perform changes on the syntax (e.g., cropping sub-sentences), token (e.g., random word insertion) and character (e.g., character swapping) levels. We systematically compare them on part-of-speech tagging, dependency parsing and semantic role labeling for a diverse set of language families using various models including the architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general rather than analytic languages such as Vietnamese. Our results suggest that the augmentation techniques can further improve over strong baselines based on mBERT. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements.
Finally, we discuss that the results most heavily depend on the task, language pair, and the model type.) <|cite_end|>. Text augmentation techniques range from straightforward ones <|cite_start|> (Reference: HotFlip: White-Box Adversarial Examples for Text Classification: We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.) <|cite_end|> <|cite_start|> (Reference: Model-portability experiments for textual temporal analysis: We explore a semi-supervised approach for improving the portability of time expression recognition to non-newswire domains: we generate additional training examples by substituting temporal expression words with potential synonyms. We explore using synonyms both from WordNet and from the Latent Words Language Model (LWLM), which predicts synonyms in context using an unsupervised approach. We evaluate a state-of-the-art time expression recognition system trained both with and without the additional training examples using data from TempEval 2010, Reuters and Wikipedia. We find that the LWLM provides substantial improvements on the Reuters corpus, and smaller improvements on the Wikipedia corpus. We find that WordNet alone never improves performance, though intersecting the examples from the LWLM and WordNet provides more stable results for Wikipedia.) <|cite_end|> to complex ones using separate deep learning models <|cite_start|> (Reference: Conditional BERT Contextual Augmentation: We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by a language model. BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term "conditional masked language model" appeared once in the original BERT paper; it indicates context-conditional and is equivalent to the term "masked language model". In our paper, "conditional masked language model" indicates that we apply an extra label-conditional constraint to the "masked language model".} task. The well-trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six different text classification tasks show that our method can be easily applied to both convolutional and recurrent neural network classifiers to obtain obvious improvement.)
<|cite_end|> <|cite_start|> (Reference: {{GAN: Gallium nitride (GaN) compounds are a family of semiconductors considered among the most important electronic junctions owing to their distinctive applications, especially in optoelectronics (lasers) and in other applied electronic fields (high-power devices), because of their wide energy gap, which ranges from 1.9 eV up to 6.28 eV; as a result, multiple energy sources can be obtained, for example light-emitting diodes (LEDs) and other diode sources (LDs). Given the current importance of this type of technology, there is a large worldwide effort in research centers, whether in specialized companies or at universities and institutes, to study the physical and applied properties of these materials, whether through fabrication or through electrical, optical, and magnetic measurements. Based on the above, we carried out several studies to enter this field in order to study the electrical properties of these materials: the relationship between current, voltage, and temperature was obtained, and the capacitance was also studied. We reached very good results for these electrical junctions, where gold and aluminum were used for the electrical contacts, and we were subsequently able to obtain results that largely match theory, as shown in the experimental results section of this work.) <|cite_end|> <|cite_start|> (Reference: Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents: Data availability is a bottleneck during early stages of development of new capabilities for intelligent artificial agents. We investigate the use of text generation techniques to augment the training data of a popular commercial artificial agent across categories of functionality, with the goal of faster development of new functionality. We explore a variety of encoder-decoder generative models for synthetic training data generation and propose using conditional variational auto-encoders. Our approach requires only direct optimization, works well with limited data and significantly outperforms the previous controlled text generation techniques. Further, the generated data are used as additional training samples in an extrinsic intent classification task, leading to improved performance by up to 5\% absolute f-score in low-resource cases, validating the usefulness of our approach.) <|cite_end|>. One of the easiest ways to apply text augmentation is with a technique called easy data augmentation (EDA). EDA has four main techniques to augment a sentence <|cite_start|> (Reference: E{DA: Faced with all the surreality imposed by the year 2020, I felt it was the moment to vent a little in the form of a cartoon. The pandemic affects everyone in different ways. In my specific case, as a comics author and illustrator, so-called "home office" work is nothing strange or outside the routine. But the months went by... The irresponsible posture of the then occupant of the highest office of the Brazilian executive branch, translated into a "SO WHAT?", in the face of more than eight thousand deaths (at that moment), even amid the chaos and collapse of the country's public health system, was the trigger for this painting. I must confess that the idea was not original. Several caricaturist and cartoonist friends had already depicted the president and his total lack of empathy in the face of the deaths, but I felt that I also needed to do it, and at the moment, it felt good. A cartoon does not necessarily have to bring laughter, and laughter is not always something light. Humor can simply be our incredulity in the face of absurdities.
What remains is my repudiation and a question: how can we deal with so many absurdities while all acting in a responsible and civic-minded way, seeking to get through this crisis without this absurd cost in lost lives?) <|cite_end|> as follows: synonym replacement (SR), random insertion (RI), random swap (RS), and random deletion (RD).
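To make these four operations concrete, here is a minimal sketch in Python (ours, not the original EDA release); the synonyms function is a deliberately empty stand-in for whatever lexical resource is available, which is exactly the language-dependent piece discussed next.

import random

def synonyms(word):
    # Stand-in synonym lookup: classical EDA queries a wordnet here; the
    # EDDA variant described below would instead query word2vec neighbours.
    return []

def synonym_replacement(words, n=1):
    out = words[:]
    candidates = [i for i, w in enumerate(out) if synonyms(w)]
    random.shuffle(candidates)
    for i in candidates[:n]:
        out[i] = random.choice(synonyms(out[i]))
    return out

def random_insertion(words, n=1):
    out = words[:]
    for _ in range(n):
        syns = synonyms(random.choice(out))
        if syns:
            out.insert(random.randrange(len(out) + 1), random.choice(syns))
    return out

def random_swap(words, n=1):
    out = words[:]
    for _ in range(n):
        if len(out) >= 2:
            i, j = random.sample(range(len(out)), 2)
            out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]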
З метою вирішення цієї проблеми у даній роботі розглядається використання двонаправлених кодерів моделі BERT з даними, які були токенізовані.) <|cite_end|> <|cite_start|> (Reference: {{BART: 旧金山市湾区捷运(BART)系统技术标准高,具备较高的安全可靠度;容量大,储备充分,服务功能完善,舒适度高,具有可持续发展性;四通八达,与其它交通方式有效衔接系统充分体现了可持续发展,安全可靠,畅达、快捷、高效,客运交通一体化的先进理念,非常值得我国轨道交通建设和发展借鉴。) <|cite_end|> and investigates attention layers for each token to observe their behavior without discussing the effects of augmentation on the models' performances. They also do not disclose how the augmentation techniques are implemented, hindering the possibility of reproducing the technique. To the best of our knowledge, no previous work has been found where EDA with neural adaptation is applied to the Swedish text. Regarding the inner workings of EDA, it is heavily dependent on wordnet synonym replacement. As aforementioned, there may not always be a comprehensive dictionary in every language, especially in low-resource languages. Therefore, we replace wordnet with the word2vec <|cite_start|> (Reference: Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.) <|cite_end|> <|cite_start|> (Reference: SALDO: a touch of yin to WordNet’s yang: ) <|cite_end|> model to integrate within this augmentation framework, which becomes a data-driven approach to augmenting data, which we call \textbf{E}asy \textbf{D}istributional \textbf{D}ata \textbf{A}ugmentation (\textbf{EDDA}). We expect that this approach can greatly help low-resource languages without good quality dictionary data, such as wordnet, use EDA techniques with a trainable component. Moreover, we also introduce how syntax information of words can also be used to augment data, which we call \textbf{T}ype \textbf{S}pecific \textbf{S}imilar word \textbf{R}eplacement (\textbf{TSSR}). This is due to randomness in EDDA may affect sentence sentiment <|cite_start|> (Reference: EasyAug: An Automatic Textual Data Augmentation Platform for Classification Tasks: Imbalanced data is a perennial problem that impedes the learning abilities of current machine learning-based classification models. One approach to address it is to leverage data augmentation to expand the training set. For image data, there are a number of suitable augmentation techniques that have proven effective in previous work. For textual data, however, due to the discrete units inherent in natural language, techniques that randomly perturb the signal may be ineffective. 
Additionally, due to the substantial discrepancy between different textual datasets (e.g., different domains), an augmentation approach that facilitates the classification on one dataset may be detrimental on another dataset. For practitioners, comparing different data augmentation techniques is non-trivial, as the corresponding methods might need to be incorporated into different system architectures, and the implementation of some approaches, such as generative models, is laborious. To address these challenges, we develop EasyAug, a data augmentation platform that provides several augmentation approaches. Users can conveniently compare the classification results and can easily choose the most suitable one for their own dataset. In addition, the system is extensible and can incorporate further augmentation approaches, such that with minimal effort a new method can comprehensively be compared with the baselines.) <|cite_end|> <|cite_start|> (Reference: A Survey on Data Augmentation for Text Classification: Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy for existing works, this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.) <|cite_end|> <|cite_start|> (Reference: Do Not Have Enough Data? Deep Learning to the Rescue!: Based on recent advances in natural language modeling and those in text generation capabilities, we propose a novel data augmentation method for text classification tasks. We use a powerful pre-trained neural network model to artificially synthesize new labeled data for supervised learning. We mainly focus on cases with scarce labeled data. Our method, referred to as language-model-based data augmentation (LAMBADA), involves fine-tuning a state-of-the-art language generator to a specific task through an initial training phase on the existing (usually small) labeled data. Using the fine-tuned model and given a class label, new sentences for the class are generated. Our process then filters these new sentences by using a classifier trained on the original data. In a series of experiments, we show that LAMBADA improves classifiers' performance on a variety of datasets. Moreover, LAMBADA significantly improves upon the state-of-the-art techniques for data augmentation, specifically those applicable to text classification tasks with little data.) <|cite_end|> by producing sentimentally dissimilar synthetic sentences; TSSR is therefore a directed approach that complements EDDA. \smallskip \noindent \textbf{Contributions.} The main contributions of this paper can be summarized as follows: \begin{itemize} \item We adapt EDA-style augmentation techniques for low-resource languages by using distributional synonym replacement that does not require strong language-specific dependencies. We exemplify its usefulness on Swedish text. \item We introduce and evaluate a novel augmentation method using POS information, which we name TSSR, as a complementary module to our EDDA framework, and show that this method can significantly improve predictive performance. \item We show that, by using the proposed augmentation techniques, we improve the F1 score while using only 40\%-50\% of the training data, compared to the baseline performance without augmentation. \item We provide our code in a GitHub repository for reproducibility purposes\footnote{\url{https://github.com/mosh98/Text_Aug_Low_Res}}. \end{itemize} <|paper_end|>
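To make the EDDA procedure described above concrete, the following is a minimal Python sketch of distributional synonym replacement. It assumes a pretrained gensim word2vec model for the target language; the function name, the replacement budget, and the file path are illustrative assumptions, not taken from the paper's repository.

```python
import random

from gensim.models import KeyedVectors


def edda_synonym_replacement(tokens, w2v, n_replace=2, topn=5):
    """Replace up to n_replace tokens with distributional neighbors.

    Instead of a WordNet lookup (as in the original EDA), candidate
    "synonyms" are nearest neighbors in a word2vec embedding space,
    so only an unlabeled corpus in the target language is required.
    """
    out = list(tokens)
    # Only tokens present in the embedding vocabulary can be replaced.
    candidates = [i for i, tok in enumerate(out) if tok in w2v.key_to_index]
    random.shuffle(candidates)
    for idx in candidates[:n_replace]:
        neighbors = [w for w, _ in w2v.most_similar(out[idx], topn=topn)]
        if neighbors:
            out[idx] = random.choice(neighbors)
    return out


# Hypothetical usage with Swedish vectors (path is an assumption):
# w2v = KeyedVectors.load_word2vec_format("sv_vectors.bin", binary=True)
# print(edda_synonym_replacement("det här är en bra film".split(), w2v))
```

TSSR would then, roughly speaking, additionally restrict the candidate neighbors by part-of-speech tag so that a replacement preserves the original token's syntactic type.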
[ "<|reference_start|> Text Data Augmentation for Deep Learning: <|reference_end|>", "<|reference_start|> Conditional BERT Contextual Augmentation: We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by language model. BERT demonstrates that a deep bidirectional language model is more powerful than either an unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\\footnote{The term \"conditional masked language model\" appeared once in original BERT paper, which indicates context-conditional, is equivalent to term \"masked language model\". In our paper, \"conditional masked language model\" indicates we apply extra label-conditional constraint to the \"masked language model\".} task. The well trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six various different text classification tasks show that our method can be easily applied to both convolutional or recurrent neural networks classifier to obtain obvious improvement. <|reference_end|>", "<|reference_start|> {{BART: 旧金山市湾区捷运(BART)系统技术标准高,具备较高的安全可靠度;容量大,储备充分,服务功能完善,舒适度高,具有可持续发展性;四通八达,与其它交通方式有效衔接系统充分体现了可持续发展,安全可靠,畅达、快捷、高效,客运交通一体化的先进理念,非常值得我国轨道交通建设和发展借鉴。 <|reference_end|>", "<|reference_start|> Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <|reference_end|>" ]
[ 0, 6, 12, 13 ]
{"<|cite_1|>": "ss-1202156", "<|cite_2|>": "arxiv-353520", "<|cite_3|>": "arxiv-353520", "<|cite_4|>": "arxiv-381845", "<|multi_cite_5_1|>": "arxiv-143449", "<|multi_cite_5_2|>": "ss-1534242", "<|multi_cite_6_1|>": "arxiv-184758", "<|multi_cite_6_2|>": "ss-1295153", "<|multi_cite_6_3|>": "arxiv-227744", "<|cite_7|>": "ss-1738302", "<|cite_9|>": "arxiv-370118", "<|multi_cite_10_1|>": "ss-1457177", "<|multi_cite_10_2|>": "ss-1532639", "<|multi_cite_11_1|>": "arxiv-51600", "<|multi_cite_11_2|>": "ss-1243558", "<|multi_cite_12_1|>": "ss-1275348", "<|multi_cite_12_2|>": "arxiv-353520", "<|multi_cite_12_3|>": "ss-1278406"}
2001.03346
<|paper_start|> Title: Time-Varying Graph Learning with Constraints on Graph Temporal Variation Abstract: Time-Varying Graph Learning with Constraints on Graph Temporal Variation: We propose a novel framework for learning time-varying graphs from spatiotemporal measurements. Given an appropriate prior on the temporal behavior of signals, our proposed method can estimate time-varying graphs from a small number of available measurements. To achieve this, we introduce two regularization terms in convex optimization problems that constrain sparseness of temporal variations of the time-varying networks. Moreover, a computationally-scalable algorithm is introduced to efficiently solve the optimization problem. The experimental results with synthetic and real datasets (point cloud and temperature data) demonstrate our proposed method outperforms the existing state-of-the-art methods. Introduction \label{sec:intro} \IEEEPARstart{S}{ignals} often have underlying network structures, e.g., sensor, traffic, brain, and social networks. Graphs, consisting of sets of nodes and edges, are a fundamental tool to describe the relationship among entities. Graph edges and the corresponding edge weights can be used to capture the similarity between nodes (where a higher positive weight indicates greater similarity). Introducing a graph representation enables us to efficiently analyze signals on networks in many practical applications such as epidemics <|cite_start|> (Reference: Analysis and Control of Epidemics: A survey of spreading processes on complex networks: This article reviews and presents various solved and open problems in the development, analysis, and control of epidemic models. We are interested in presenting a relatively concise report for new engineers looking to enter the field of spreading processes on complex networks.) <|cite_end|> <|cite_start|> (Reference: The effect of network topology on the spread of epidemics: Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e/sup na/, for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.) 
<|cite_end|>, transportation networks <|cite_start|> (Reference: The role of the airline transportation network in the prediction and predictability of global epidemics: The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.) <|cite_end|>, and social networks <|cite_start|> (Reference: Diffusion in Social Networks as SIS Epidemics: Beyond Full Mixing and Complete Graphs: Peer influence and interactions between agents in a population give rise to complex, nonlinear behaviors. This paper adopts the SIS (susceptible-infected-susceptible) framework from epidemiology to analytically study how network topology affects the diffusion of ideas/opinions/beliefs/innovations in social networks. We introduce the scaled SIS process, which models peer influence as neighbor-to-neighbor infections. We model the scaled SIS process as a continuous-time Markov process and derive for this process its closed form equilibrium distribution. The adjacency matrix that describes the underlying social network is explicitly reflected in this distribution. The paper shows that interesting population asymptotic behaviors occur for scenarios where the individual tendencies of each agent oppose peer influences. Specifically, we determine how the most probable configuration of agent states (i.e., the population configuration with maximum equilibrium distribution) depends on both model parameters and network topology. We show that, for certain regions of the parameter space, this and related issues reduce to standard graph questions like the maximum independent set problem.) <|cite_end|>. Even when data is not associated with an actual network, graphs are efficient tools to represent latent structures in the data. For example, a principal component analysis can be improved by imposing a prior based on graphs <|cite_start|> (Reference: Fast Robust PCA on Graphs: Mining useful clusters from high dimensional data has received significant attention of the computer vision and pattern recognition community in the recent years. Linear and non-linear dimensionality reduction has played an important role to overcome the curse of dimensionality. 
However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), non-convexity (for matrix factorization methods) and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient and scalable for huge datasets with O(nlog(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data.) <|cite_end|> <|cite_start|> (Reference: Graph dual regularization non-negative matrix factorization for co-clustering: ) <|cite_end|> <|cite_start|> (Reference: Low-rank matrix approximation with manifold regularization: This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.) <|cite_end|>. Graph-based image processing enables us to improve performance in several image processing tasks <|cite_start|> (Reference: Bilateral Filter: Graph Spectral Interpretation and Extensions: In this paper we study the bilateral filter proposed by Tomasi and Manduchi, as a spectral domain transform defined on a weighted graph. The nodes of this graph represent the pixels in the image and a graph signal defined on the nodes represents the intensity values. Edge weights in the graph correspond to the bilateral filter coefficients and hence are data adaptive. Spectrum of a graph is defined in terms of the eigenvalues and eigenvectors of the graph Laplacian matrix. We use this spectral interpretation to generalize the bilateral filter and propose more flexible and application specific spectral designs of bilateral-like filters. We show that these spectral filters can be implemented with k-iterative bilateral filtering operations and do not require expensive diagonalization of the Laplacian matrix.) 
<|cite_end|> <|cite_start|> (Reference: A graph-based joint bilateral approach for depth enhancement: Depth images are often presented at a lower spatial resolution, either due to limitations in the acquisition of the depth or to increase compression efficiency. As a result, upsampling low-resolution depth images to a higher spatial resolution is typically required prior to depth image based rendering. In this paper, depth enhancement and up-sampling techniques are proposed using a graph-based formulation. In one scheme, the depth is first upsampled using a conventional method, then followed by a graph-based joint bilateral filtering to enhance edges and reduce noise. A second scheme avoids the two-step processing and upsamples the depth directly using the proposed graph-based joint bilateral upsampling. Both filtering and interpolation problems are formulated as regularization problems and the solutions are different from conventional approaches. Further, we also studied operations on different graph structures such as star graph and 8-connected graph. Experimental results show that the proposed methods produce slightly more accurate depth at the full resolution with improved rendering quality of intermediate views.) <|cite_end|> <|cite_start|> (Reference: Graph signal denoising via trilateral filter on graph spectral domain: This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.) <|cite_end|> <|cite_start|> (Reference: Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing: We introduce a nonlocal discrete regularization framework on weighted graphs of the arbitrary topologies for image and manifold processing. The approach considers the problem as a variational one, which consists of minimizing a weighted sum of two energy terms: a regularization one that uses a discrete weighted -Dirichlet energy and an approximation one. This is the discrete analogue of recent continuous Euclidean nonlocal regularization functionals. The proposed formulation leads to a family of simple and fast nonlinear processing methods based on the weighted -Laplace operator, parameterized by the degree of regularity, the graph structure and the graph weight function. These discrete processing methods provide a graph-based version of recently proposed semi-local or nonlocal processing methods used in image and mesh processing, such as the bilateral filter, the TV digital filter or the nonlocal means filter. It works with equal ease on regular 2-D and 3-D images, manifolds or any data. 
We illustrate the abilities of the approach by applying it to various types of images, meshes, manifolds, and data represented as graphs.) <|cite_end|> <|cite_start|> (Reference: Total generalized variation for graph signals: This paper proposes a second-order discrete total generalized variation (TGV) for arbitrary graph signals, which we call the graph TGV (G-TGV). The original TGV was introduced as a natural higher-order extension of the well-known total variation (TV) and is an effective prior for piecewise smooth signals. Similarly, the proposed G-TGV is an extension of the TV for graph signals (G-TV) and inherits the capability of the TGV, such as avoiding the staircasing effect. Thus the G-TGV is expected to be a fundamental building block for graph signal processing. We provide its applications to piecewise-smooth graph signal inpainting and 3D mesh smoothing with illustrative experimental results.) <|cite_end|>. However, in many cases graphs are not provided {\em a priori}. {\it Graph learning} methods aim at identifying graphs from observed data <|cite_start|> (Reference: Learning graphs from data: A signal representation perspective: The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data. When a natural choice of the graph is not readily available from the data sets, it is thus desirable to infer or learn a graph topology from the data. In this tutorial overview, we survey solutions to the problem of graph learning, including classical viewpoints from statistics and physics, and more recent approaches that adopt a graph signal processing (GSP) perspective. We further emphasize the conceptual similarities and differences between classical and GSP-based graph inference methods, and highlight the potential advantage of the latter in a number of theoretical and practical scenarios. We conclude with several open issues and challenges that are keys to the design of future signal processing and machine learning algorithms for learning graphs from data.) <|cite_end|> <|cite_start|> (Reference: Connecting the Dots: Identifying Network Structure via Graph Signal Processing: Network topology inference is a significant problem in network science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known and then analyze how the graph's algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks; and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation. This article offers an overview of graph-learning methods developed to bridge the aforementioned gap, by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated as a result of a network process defined in such a graph.
A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks and nonlinear models of pairwise interaction, as well as extensions to directed (di) graphs and their relation to causal inference. All in all, this article introduces readers to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time.) <|cite_end|> <|cite_start|> (Reference: Topology identification and learning over graphs: Accounting for nonlinearities and dynamics: Identifying graph topologies as well as processes evolving over graphs emerge in various applications involving gene-regulatory, brain, power, and social networks, to name a few. Key graph-aware learning tasks include regression, classification, subspace clustering, anomaly identification, interpolation, extrapolation, and dimensionality reduction. Scalable approaches to deal with such high-dimensional tasks experience a paradigm shift to address the unique modeling and computational challenges associated with data-driven sciences. Albeit simple and tractable, linear time-invariant models are limited since they are incapable of handling generally evolving topologies, as well as nonlinear and dynamic dependencies between nodal processes. To this end, the main goal of this paper is to outline overarching advances, and develop a principled framework to capture nonlinearities through kernels, which are judiciously chosen from a preselected dictionary to optimally fit the data. The framework encompasses and leverages (non) linear counterparts of partial correlation and partial Granger causality, as well as (non)linear structural equations and vector autoregressions, along with attributes such as low rank, sparsity, and smoothness to capture even directional dependencies with abrupt change points, as well as time-evolving processes over possibly time-evolving topologies. The overarching approach inherits the versatility and generality of kernel-based methods, and lends itself to batch and computationally affordable online learning algorithms, which include novel Kalman filters over graphs. Real data experiments highlight the impact of the nonlinear and dynamic models on consumer and financial networks, as well as gene-regulatory and functional connectivity brain networks, where connectivity patterns revealed exhibit discernible differences relative to existing approaches.) <|cite_end|>. Each observation is a vector, and each entry corresponds to the observation at one node. The goal is to obtain the weights of all the edges connecting those nodes. Most graph learning methods identify a single static graph from all the observations <|cite_start|> (Reference: How to learn a graph from smooth signals: We propose a framework that learns the graph structure underlying a set of smooth signals. Given $X\in\mathbb{R}^{m\times n}$ whose rows reside on the vertices of an unknown graph, we learn the edge weights $w\in\mathbb{R}_+^{m(m-1)/2}$ under the smoothness assumption that $\text{tr}{X^\top LX}$ is small. We show that the problem is a weighted $\ell$-1 minimization that leads to naturally sparse solutions. We point out how known graph learning or construction techniques fall within our framework and propose a new model that performs better than the state of the art in many settings. 
We present efficient, scalable primal-dual based algorithms for both our model and the previous state of the art, and evaluate their performance on artificial and real data.) <|cite_end|> <|cite_start|> (Reference: Learning Laplacian Matrix in Smooth Graph Signal Representations: The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.) <|cite_end|> <|cite_start|> (Reference: Graph Learning From Data Under Laplacian and Structural Constraints: Graphs are fundamental mathematical structures used in various fields to represent data, signals, and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations, and (iii) associated algorithms. Specifically, graph learning problems are posed as the estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori parameter estimation of Gaussian–Markov random field models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency.) <|cite_end|> <|cite_start|> (Reference: Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals: Many tools from the field of graph signal processing exploit knowledge of the underlying graph's structure (e.g., as encoded in the Laplacian matrix) to process signals on the graph. Therefore, in the case when no graph is available, graph signal processing tools cannot be used anymore. Researchers have proposed approaches to infer a graph topology from observations of signals on its nodes. Since the problem is ill-posed, these approaches make assumptions, such as smoothness of the signals on the graph, or sparsity priors. 
In this paper, we propose a characterization of the space of valid graphs, in the sense that they can explain stationary signals. To simplify the exposition in this paper, we focus here on the case where signals were i.i.d. at some point back in time and were observed after diffusion on a graph. We show that the set of graphs verifying this assumption has a strong connection with the eigenvectors of the covariance matrix, and forms a convex set. Along with a theoretical study in which these eigenvectors are assumed to be known, we consider the practical case when the observations are noisy, and experimentally observe how fast the set of valid graphs converges to the set obtained when the exact eigenvectors are known, as the number of observations grows. To illustrate how this characterization can be used for graph recovery, we present two methods for selecting a particular point in this set under chosen criteria, namely graph simplicity and sparsity. Additionally, we introduce a measure to evaluate how much a graph is adapted to signals under a stationarity assumption. Finally, we evaluate how state-of-the-art methods relate to this framework through experiments on a dataset of temperatures.) <|cite_end|> <|cite_start|> (Reference: Learning heat diffusion graphs: Effective information analysis generally boils down to properly identifying the structure or geometry of the data, which is often represented by a graph. In some applications, this structure may be partly determined by design constraints or pre-determined sensing arrangements, like in road transportation networks for example. In general though, the data structure is not readily available and becomes pretty difficult to define. In particular, the global smoothness assumptions, that most of the existing works adopt, are often too general and unable to properly capture localized properties of data. In this paper, we go beyond this classical data model and rather propose to represent information as a sparse combination of localized functions that live on a data structure represented by a graph. Based on this model, we focus on the problem of inferring the connectivity that best explains the data samples at different vertices of a graph that is a priori unknown. We concentrate on the case where the observed data is actually the sum of heat diffusion processes, which is a quite common model for data on networks or other irregular structures. We cast a new graph learning problem and solve it with an efficient nonconvex optimization algorithm. Experiments on both synthetic and real world data finally illustrate the benefits of the proposed graph learning framework and confirm that the data structure can be efficiently learned from data observations only. We believe that our algorithm will help solving key questions in diverse application domains such as social and biological network analysis where it is crucial to unveil proper geometry for data understanding and inference.) <|cite_end|> <|cite_start|> (Reference: Signal Processing on Graphs: Causal Modeling of Unstructured Data: Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. 
A first task in its analytics is to derive a low dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested.) <|cite_end|> <|cite_start|> (Reference: Learning Sparse Graphs Under Smoothness Prior: In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.) <|cite_end|> <|cite_start|> (Reference: Learning graphs with monotone topology properties and multiple connected components: Recent papers have formulated the problem of learning graphs from data as an inverse covariance estimation problem with graph Laplacian constraints. While such problems are convex, existing methods cannot guarantee that solutions will have specific graph topology properties (e.g., being a tree), which are desirable for some applications. The problem of learning a graph with topology properties is in general non-convex. In this paper, we propose an approach to solve these problems by decomposing them into two sub-problems for which efficient solutions are known. Specifically, a graph topology inference (GTI) step is employed to select a feasible graph topology. Then, a graph weight estimation (GWE) step is performed by solving a generalized graph Laplacian estimation problem, where edges are constrained by the topology found in the GTI step. Our main result is a bound on the error of the GWE step as a function of the error in the GTI step. This error bound indicates that the GTI step should be solved using an algorithm that approximates the data similarity matrix by another matrix whose entries have been thresholded to zero to have the desired type of graph topology. The GTI stage can leverage existing methods, which are typically based on minimizing the total weight of removed edges. Since the GWE stage is an inverse covariance estimation problem with linear constraints, it can be solved using existing convex optimization methods. 
We demonstrate that our approach can achieve good results for both synthetic and texture image data.) <|cite_end|> <|cite_start|> (Reference: Graph Learning from Filtered Signals: Graph System and Diffusion Kernel Identification: This paper introduces a novel graph signal processing framework for building graph-based models from classes of filtered signals. In our framework, graph-based modeling is formulated as a graph system identification problem, where the goal is to learn a weighted graph (a graph Laplacian matrix) and a graph-based filter (a function of graph Laplacian matrices). In order to solve the proposed problem, an algorithm is developed to jointly identify a graph and a graph-based filter (GBF) from multiple signal/data observations. Our algorithm is valid under the assumption that GBFs are one-to-one functions. The proposed approach can be applied to learn diffusion (heat) kernels, which are popular in various fields for modeling diffusion processes. In addition, for specific choices of graph-based filters, the proposed problem reduces to a graph Laplacian estimation problem. Our experimental results demonstrate that the proposed algorithm outperforms the current state-of-the-art methods. We also implement our framework on a real climate dataset for modeling of temperature signals.) <|cite_end|>. These {\it static graph learning} methods assume that the node relationships obtained from the observations do not change during the measurement process. However, in many applications where the observations are obtained over a period of time, a time-varying graph will provide a better model. Examples of such applications include estimation of time-varying brain functional connectivity from EEG or fMRI data <|cite_start|> (Reference: The dynamic functional connectome: State-of-the-art and perspectives: ) <|cite_end|>, identification of the temporal transition of biological networks such as protein, RNA, and DNA networks <|cite_start|> (Reference: Inference of dynamic networks using time-course Data: Cells execute their functions through dynamic operations of biological networks. Dynamic networks delineate the operation of biological networks in terms of temporal changes of abundances or activities of nodes (proteins and RNAs), as well as formation of new edges and disappearance of existing edges over time. Global genomic and proteomic technologies can be used to decode dynamic networks. However, using these experimental methods, it is still challenging to identify temporal transition of nodes and edges. Thus, several computational methods for estimating dynamic topological and functional characteristics of networks have been introduced. In this review, we summarize concepts and applications of these computational methods for inferring dynamic networks and further summarize methods for estimating spatial transition of biological networks.) <|cite_end|>, and inference of relationships among companies from historical stock price data <|cite_start|> (Reference: Network Inference via the Time-Varying Graphical Lasso: Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time.
In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.) <|cite_end|>, dynamic point cloud processing, and analysis of physical measurement data such as temperature. A straightforward approach to estimating a time-varying graph would consist of aggregating temporal observations into non-overlapping windows and then using an existing static graph learning method to estimate a graph for each window. However, such an approach has some drawbacks. First, this method would estimate a graph {\em independently} for each temporal interval, thus ignoring temporal relations that may exist in time-varying graphs. Second, time-varying graph learning may require estimating graphs from time windows containing only a small fraction of observations, due to the trade-off between the choice of window length and temporal resolution. For example, if we choose a short window to adapt to fast temporal changes in the graph, then we may not have enough data to learn a graph within each window; existing static graph learning methods cannot successfully infer graphs from a small number of observations. On the other hand, if we use a longer window, we may not be able to keep up with the temporal changes. This paper presents a {\it time-varying graph learning} method based on time-varying graph factor analysis (TGFA), which is an extension of its static counterpart, static graph factor analysis (SGFA) <|cite_start|> (Reference: Learning Laplacian Matrix in Smooth Graph Signal Representations: The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such property and is based on minimizing the variations of the signals on the learned graph.
Experiments on both synthetic and real world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.) <|cite_end|>. We propose TGFA-based methods to estimate time-varying graphs from a collection of spatiotemporal measurements. SGFA formulates a signal generation model from a graph signal processing (GSP) perspective, where it is assumed that the observed signals have specific spectral properties with respect to the graph Fourier transform (GFT) of the graph to be learned. For example, if a multivariate Gaussian model is chosen, it leads to observed signals generated from a Gaussian distribution whose inverse covariance matrix (i.e., precision matrix) is given by the graph Laplacian of the underlying graph <|cite_start|> (Reference: Learning Laplacian Matrix in Smooth Graph Signal Representations: The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.) <|cite_end|> <|cite_start|> (Reference: Learning graphs from data: A signal representation perspective: The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data. When a natural choice of the graph is not readily available from the data sets, it is thus desirable to infer or learn a graph topology from the data. In this tutorial overview, we survey solutions to the problem of graph learning, including classical viewpoints from statistics and physics, and more recent approaches that adopt a graph signal processing (GSP) perspective. We further emphasize the conceptual similarities and differences between classical and GSP-based graph inference methods, and highlight the potential advantage of the latter in a number of theoretical and practical scenarios. We conclude with several open issues and challenges that are keys to the design of future signal processing and machine learning algorithms for learning graphs from data.) <|cite_end|>. Unlike SGFA, TGFA considers the graph evolution as illustrated in Fig. \ref{fig:tgfa}.
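For intuition, the SGFA model just described admits a standard formulation, sketched below in our own notation; this is a summary consistent with the smoothness-based graph learning literature cited above, not an equation reproduced from this paper.

```latex
% A graph signal x is modeled as a (degenerate) multivariate Gaussian
% whose precision matrix is the graph Laplacian L:
%     p(x | L) \propto exp(-x^T L x / 2),  i.e.,  x ~ N(0, L^{\dagger}).
% Given observations X = [x_1, ..., x_K], maximum a posteriori
% estimation of L penalizes the total smoothness term
%     tr(X^T L X) = (1/2) \sum_{i,j} W_{ij} ||x_{i,:} - x_{j,:}||_2^2,
% where x_{i,:} collects the values observed at node i, so that
\[
  \hat{\mathbf{L}} = \operatorname*{arg\,min}_{\mathbf{L} \in \mathcal{L}}
  \operatorname{tr}\left(\mathbf{X}^{\top} \mathbf{L} \mathbf{X}\right)
  + f(\mathbf{L}),
\]
% with \mathcal{L} the set of valid graph Laplacians and f a regularizer
% (e.g., a log-determinant or Frobenius-norm term controlling the scale).
```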
The graph evolution can be represented by a sequence of graph Laplacians and their corresponding temporal variations. This study focuses on two time-varying graph models with the following two properties: \textbf{(P1) Temporal homogeneity}: Most edges and their weights in the time-varying graph should remain unchanged over a short-term time horizon. In other words, at any given time, only a small number of edges in the time-varying graph change. Time-varying graphs in many applications satisfy this property. For example, consider a sensor network where the nodes and the edges represent sensor locations and correlations among sensor measurements, respectively. If the sensors record the temperature in a building, various factors such as air conditioning, sunlight, and the number of people in the room locally affect the correlations among the sensor measurements, but these factors vary smoothly over time. As a result, this sensor network will be a time-varying graph in which most edges remain constant while the weights change only slightly over time, i.e., it follows \textbf{(P1)}. In addition to this example, time-varying graphs in fMRI and various biological networks seem to have this property <|cite_start|> (Reference: Estimating time-varying brain connectivity networks from functional MRI time series: ) <|cite_end|> <|cite_start|> (Reference: Inference of dynamic networks using time-course Data: Cells execute their functions through dynamic operations of biological networks. Dynamic networks delineate the operation of biological networks in terms of temporal changes of abundances or activities of nodes (proteins and RNAs), as well as formation of new edges and disappearance of existing edges over time. Global genomic and proteomic technologies can be used to decode dynamic networks. However, using these experimental methods, it is still challenging to identify temporal transition of nodes and edges. Thus, several computational methods for estimating dynamic topological and functional characteristics of networks have been introduced. In this review, we summarize concepts and applications of these computational methods for inferring dynamic networks and further summarize methods for estimating spatial transition of biological networks.) <|cite_end|>. \textbf{(P2) Switching behavior}: Edges and weights remain almost unchanged over time; however, they may change suddenly within a few time slots. This type of time-varying graph appears in situations where some factors cause sudden changes in graph topology. Prominent examples include brain networks, where epileptic seizures make the topology change suddenly <|cite_start|> (Reference: Graph theoretical analysis reveals disrupted topological properties of whole brain functional networks in temporal lobe epilepsy: ) <|cite_end|>. \begin{figure} \centering \includegraphics[width=\linewidth]{tgfa-eps-converted-to.pdf} \caption{An overview of time-varying graph factor analysis. ${\bf L}_{t}$ and $\Delta {\bf L}_{t}$ represent the graph Laplacian at the $t$th time slot and the graph temporal variation, respectively. This study focuses on learning a time-varying graph, i.e., the sequence of graph Laplacians, from the observed signal ${\bf x}$.} \label{fig:tgfa} \end{figure} In this paper, we design an algorithm to estimate the two types of time-varying graphs, namely, graphs with temporal homogeneity and graphs with switching behavior.
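As a preview of how these two properties can be encoded, penalties on the temporal differences of the edge weights naturally express \textbf{(P1)} and \textbf{(P2)}. The forms below are an illustrative sketch in our notation; the exact regularizers used in this paper are defined in Section \ref{sec:proposed}.

```latex
% Let w_t denote the vector of edge weights at time slot t.
% (P1) Temporal homogeneity: only a few individual edge weights change
% between consecutive slots, which an elementwise l1 (fused-lasso-type)
% penalty on the temporal difference promotes:
\[
  R_{\mathrm{P1}} = \sum_{t=2}^{T}
  \left\lVert \mathbf{w}_t - \mathbf{w}_{t-1} \right\rVert_1 .
\]
% (P2) Switching behavior: the graph stays constant except at a few
% change points, which a group penalty (an unsquared l2 norm per
% difference) promotes, since it drives entire differences to zero:
\[
  R_{\mathrm{P2}} = \sum_{t=2}^{T}
  \left\lVert \mathbf{w}_t - \mathbf{w}_{t-1} \right\rVert_2 .
\]
```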
For this purpose, we formulate the graph learning problem as a convex optimization problem with regularization of the temporal variation derived from TGFA. To solve the convex optimization problem, we utilize a primal-dual splitting algorithm <|cite_start|> (Reference: A Primal–Dual Splitting Method for Convex Optimization Involving Lipschitzian, Proximable and Linear Composite Terms: ) <|cite_end|>, which enables us to estimate time-varying graphs more successfully than static graph learning methods. In experiments with synthetic datasets, our proposed method outperforms existing methods, especially when only small amounts of data are available. We also evaluate the performance of our methods on tasks involving real data, such as dynamic point cloud denoising and learning time-varying graphs from spatiotemporal meteorological and temperature data. Our results for dynamic point cloud denoising show that estimating the graph topology with our method improves the denoising performance. In the meteorological and temperature data application, we show that our method can learn reasonable time-varying graphs that capture the geographical characteristics \textit{without using geographical information}. Our recent work <|cite_start|> (Reference: Time-varying Graph Learning Based on Sparseness of Temporal Variation: We propose a method for graph learning from spatiotemporal measurements. We aim at inferring time-varying graphs under the assumption that changes in graph topology and weights are sparse in time. The problem is formulated as a convex optimization problem to impose a constraint on the temporal relation of the time-varying graph. Experimental results with synthetic data show the effectiveness of our proposed method.) <|cite_end|> proposed a framework for learning time-varying graphs that follow \textbf{(P1)}. Our work in this paper substantially extends the work in <|cite_start|> (Reference: Time-varying Graph Learning Based on Sparseness of Temporal Variation: We propose a method for graph learning from spatiotemporal measurements. We aim at inferring time-varying graphs under the assumption that changes in graph topology and weights are sparse in time. The problem is formulated as a convex optimization problem to impose a constraint on the temporal relation of the time-varying graph. Experimental results with synthetic data show the effectiveness of our proposed method.) <|cite_end|> by also allowing the learning of graphs with property \textbf{(P2)}, which is more suitable for change-point detection problems. We also evaluate the robustness of our proposed approach under several temporal-resolution conditions and compare its computation time with that of related approaches. The remainder of this paper is organized as follows. Section \ref{notations} presents preliminaries concerning graphs and proximal operators. Section \ref{sec:tvgl_form} describes the problem formulation of time-varying graph learning. Section \ref{sec:proposed} defines the regularization for temporal graph variation and the optimization problem to learn graphs, and proposes an algorithm to find a solution. Sections \ref{exp} and \ref{application} provide experimental results with synthetic data and real data, respectively. Finally, we present conclusions and future work in Section \ref{conclusion}.
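As background for the solver, a generic primal-dual splitting iteration for problems of the form $\min_{x} F(x)+G(x)+H(\mathbf{A}x)$, with $F$ smooth and $G$, $H$ proximable, can be sketched in Python as follows. This is a textbook-style illustration of the class of algorithms cited above; the step sizes and proximal operators are illustrative assumptions, not this paper's exact update rules.

```python
import numpy as np


def primal_dual_splitting(grad_F, prox_G, prox_Hconj, A, x0, y0,
                          tau=0.1, sigma=0.1, n_iter=500):
    """Generic primal-dual splitting for min_x F(x) + G(x) + H(Ax),
    where F is differentiable and G, H are handled via proximal
    operators (as needed for non-differentiable l1-type penalties)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(n_iter):
        # Primal update: gradient step on F, proximal step on G,
        # with the dual variable coupled through A^T y.
        x_new = prox_G(x - tau * (grad_F(x) + A.T @ y), tau)
        # Dual update: proximal step on the conjugate of H,
        # evaluated at an extrapolated primal point.
        y = prox_Hconj(y + sigma * (A @ (2.0 * x_new - x)), sigma)
        x = x_new
    return x


def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (promotes sparse differences)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def prox_l1_conjugate(v, t):
    """Prox of the conjugate of ||.||_1: by Moreau decomposition this is
    the projection onto the l-infinity unit ball (independent of t)."""
    return np.clip(v, -1.0, 1.0)
```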
\subsection{Related Work} Several approaches address the graph learning problem, and two overview papers on graph learning have been published recently <|cite_start|> (Reference: Learning graphs from data: A signal representation perspective: The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data. When a natural choice of the graph is not readily available from the data sets, it is thus desirable to infer or learn a graph topology from the data. In this tutorial overview, we survey solutions to the problem of graph learning, including classical viewpoints from statistics and physics, and more recent approaches that adopt a graph signal processing (GSP) perspective. We further emphasize the conceptual similarities and differences between classical and GSP-based graph inference methods, and highlight the potential advantage of the latter in a number of theoretical and practical scenarios. We conclude with several open issues and challenges that are keys to the design of future signal processing and machine learning algorithms for learning graphs from data.) <|cite_end|> <|cite_start|> (Reference: Connecting the Dots: Identifying Network Structure via Graph Signal Processing: Network topology inference is a significant problem in network science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known and then analyze how the graph's algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks; and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation. This article offers an overview of graph-learning methods developed to bridge the aforementioned gap, by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated as a result of a network process defined in such a graph. A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks and nonlinear models of pairwise interaction, as well as extensions to directed (di) graphs and their relation to causal inference. All in all, this article introduces readers to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time.) <|cite_end|>. Among the techniques for learning time-varying graphs, the method of Kalofolias et al., which introduces constraints so that the edge weights change smoothly over time, is the closest to ours <|cite_start|> (Reference: Learning Time Varying Graphs: We consider the problem of inferring the hidden structure of high-dimensional time-varying data. In particular, we aim at capturing the dynamic relationships by representing data as valued nodes in a sequence of graphs.
Our approach is motivated by the observation that imposing a meaningful graph topology can help solving the generally ill-posed and challenging problem of structure inference. To capture the temporal evolution in the sequence of graphs, we introduce a new prior that asserts that the graph edges change smoothly in time. We propose a primal-dual optimization algorithm that scales linearly with the number of allowed edges and can be easily parallelized. Our new algorithm is shown to outperform standard graph learning and other baseline methods both on a synthetic and a real dataset.) <|cite_end|>. This approach uses a smoothness criterion and Tikhonov regularization of the temporal variation in graphs to learn a time-varying graph. However, it does not learn time-varying graphs that follow \textbf{(P1)} exactly, because Tikhonov regularization promotes smooth variation of edge weights over time, i.e., it allows both edges and edge weights to change over short-term time horizons. While our approach uses cost functions similar to those employed in <|cite_start|> (Reference: Learning Time Varying Graphs: We consider the problem of inferring the hidden structure of high-dimensional time-varying data. In particular, we aim at capturing the dynamic relationships by representing data as valued nodes in a sequence of graphs. Our approach is motivated by the observation that imposing a meaningful graph topology can help solving the generally ill-posed and challenging problem of structure inference. To capture the temporal evolution in the sequence of graphs, we introduce a new prior that asserts that the graph edges change smoothly in time. We propose a primal-dual optimization algorithm that scales linearly with the number of allowed edges and can be easily parallelized. Our new algorithm is shown to outperform standard graph learning and other baseline methods both on a synthetic and a real dataset.) <|cite_end|>, our regularization terms favor learning time-varying graphs with the \textbf{(P1)} property. The resulting optimization problem, however, cannot be solved straightforwardly in the same manner as <|cite_start|> (Reference: Learning Time Varying Graphs: We consider the problem of inferring the hidden structure of high-dimensional time-varying data. In particular, we aim at capturing the dynamic relationships by representing data as valued nodes in a sequence of graphs. Our approach is motivated by the observation that imposing a meaningful graph topology can help solving the generally ill-posed and challenging problem of structure inference. To capture the temporal evolution in the sequence of graphs, we introduce a new prior that asserts that the graph edges change smoothly in time. We propose a primal-dual optimization algorithm that scales linearly with the number of allowed edges and can be easily parallelized. Our new algorithm is shown to outperform standard graph learning and other baseline methods both on a synthetic and a real dataset.) <|cite_end|>, because our regularization terms are not differentiable. Therefore, we reformulate the optimization problem so that it can be solved with a primal-dual splitting algorithm, which leads to efficient learning of a time-varying graph.
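To make this contrast concrete, in our own illustrative notation (not the paper's), let $\mathbf{w}_t$ denote the vectorized edge weights at time $t$. The two families of temporal regularizers can then be written as
\begin{equation*}
R_{\mathrm{Tikhonov}} = \sum_{t=2}^{T} \| \mathbf{w}_{t} - \mathbf{w}_{t-1} \|_{2}^{2}, \qquad R_{\ell_1} = \sum_{t=2}^{T} \| \mathbf{w}_{t} - \mathbf{w}_{t-1} \|_{1}.
\end{equation*}
The squared $\ell_2$ penalty shrinks all temporal differences but rarely drives any of them exactly to zero, so every edge may drift at every time step; the $\ell_1$ (fused-lasso-type) penalty sets most temporal differences exactly to zero, so only a few edges change per step, which matches \textbf{(P1)}.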
Hallac et al. address the problem of learning a time-varying graph using the time-varying graphical Lasso (TVGL) <|cite_start|> (Reference: Network Inference via the Time-Varying Graphical Lasso: Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.) <|cite_end|>, which combines the graphical Lasso with a temporal regularization and finds the solution using the alternating direction method of multipliers (ADMM). Note that graphs estimated with this approach often have negative edge weights. In contrast, our proposed method constrains the estimated graph to have non-negative edge weights, because such graphs are desired in many applications <|cite_start|> (Reference: Learning graphs from data: A signal representation perspective: The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data. When a natural choice of the graph is not readily available from the data sets, it is thus desirable to infer or learn a graph topology from the data. In this tutorial overview, we survey solutions to the problem of graph learning, including classical viewpoints from statistics and physics, and more recent approaches that adopt a graph signal processing (GSP) perspective. We further emphasize the conceptual similarities and differences between classical and GSP-based graph inference methods, and highlight the potential advantage of the latter in a number of theoretical and practical scenarios. We conclude with several open issues and challenges that are keys to the design of future signal processing and machine learning algorithms for learning graphs from data.) <|cite_end|> <|cite_start|> (Reference: Connecting the Dots: Identifying Network Structure via Graph Signal Processing: Network topology inference is a significant problem in network science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known and then analyze how the graph's algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks; and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation.
This article offers an overview of graph-learning methods developed to bridge the aforementioned gap, by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated as a result of a network process defined in such a graph. A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks and nonlinear models of pairwise interaction, as well as extensions to directed (di) graphs and their relation to causal inference. All in all, this article introduces readers to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time.) <|cite_end|>. Furthermore, our method has lower computational complexity: TVGL requires eigendecompositions of the target matrices to compute the proximal operator of a log-determinant term, whereas our algorithm is eigendecomposition-free. Baingana et al. consider the problem of identifying a time-varying graph that captures causal relationships in network contagion evolution <|cite_start|> (Reference: Tracking switched dynamic network topologies from information cascades: Contagions, such as the spread of popular news stories, or infectious diseases, propagate in cascades over dynamic networks with unobservable topologies. However, “social signals,” such as product purchase time, or blog entry timestamps are measurable, and implicitly depend on the underlying topology, making it possible to track it over time. Interestingly, network topologies often “jump” between discrete states that may account for sudden changes in the observed signals. The present paper advocates a switched dynamic structural equation model to capture the topology dependent cascade evolution, as well as the discrete states driving the underlying topologies. Conditions under which the proposed switched model is identifiable are established. Leveraging the edge sparsity inherent to social networks, a recursive ℓ1-norm regularized least-squares estimator is put forth to jointly track the states and network topologies. An efficient first-order proximal-gradient algorithm is developed to solve the resulting optimization problem. Numerical experiments on both synthetic data and real cascades measured over the span of one year are conducted, and test results corroborate the efficacy of the advocated approach.) <|cite_end|>. This model focuses on spreading processes over networks (e.g., infectious diseases and fake news) and aims to identify the links that propagate information between nodes at each time step. However, this approach can estimate only directed graphs. In contrast, our method estimates undirected graphs whose connections change over time, which is often desired for data without causal relationships, e.g., data acquired by physical sensors, such as point cloud coordinates and temperatures. \subsection{Notation} The notation used in this paper is summarized in Table \ref{tb:notation}.
Throughout this paper, vectors and matrices are written in bold lowercase and bold uppercase letters, respectively. The calligraphic capital letters, namely $\mathcal{V}$ and $\mathcal{W}_{m}$, denote sets. \renewcommand{\arraystretch}{1}{ \begin{table}[tb] \centering \caption{List of notation} {\small \begin{tabular}{l l} \hline\hline $N$ & Number of nodes \\ $K$ & Number of data chunks \\ $T$ & Number of time frames \\ ${a}_{i}, ({\bf a})_{i}, a[i]$ & $i$th entry of a vector \\ ${A}_{ij}, ({\bf A})_{ij}$ & $(i,j)$ entry of a matrix\\ $({\bf A})_{i}$ & $i$th column of ${\bf A}$ \\ ${\bf A}^{\dagger}$ & Moore-Penrose pseudoinverse of ${\bf A}$ \\ $\mathbb{R}_{+}$ & Set of the nonnegative real numbers \\ $\circ$ & Hadamard product \\ $\| {\bf a} \|_{2}^{2}, \| {\bf A} \|_{F}^{2}$ & Sum of squared values of all elements \\ $\| {\bf a} \|_{1}, \| {\bf A} \|_{1}$ & Sum of absolute values of all elements \\ $\mathrm{Tr}({\bf A})$ & Trace of a matrix \\ $\mathrm{diag}({\bf A})$ & Vector formed by the diagonal elements\\ \hline\hline \end{tabular} \label{tb:notation} } \end{table} } <|paper_end|>
[ "<|reference_start|> Graph Learning From Data Under Laplacian and Structural Constraints: Graphs are fundamental mathematical structures used in various fields to represent data, signals, and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations, and (iii) associated algorithms. Specifically, graph learning problems are posed as the estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori parameter estimation of Gaussian–Markov random field models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency. <|reference_end|>", "<|reference_start|> Signal Processing on Graphs: Causal Modeling of Unstructured Data: Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. A first task in its analytics is to derive a low dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested. <|reference_end|>", "<|reference_start|> Network Inference via the Time-Varying Graphical Lasso: Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. 
We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability. <|reference_end|>", "<|reference_start|> Connecting the Dots: Identifying Network Structure\nvia Graph Signal Processing: Network topology inference is a significant problem in network science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known and then analyze how the graph?s algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks; and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation. This article offers an overview of graph-learning methods developed to bridge the aforementioned gap, by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated as a result of a network process defined in such a graph. A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks and nonlinear models of pairwise interaction, as well as extensions to directed (di) graphs and their relation to causal inference. All in all, this article introduces readers to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time. <|reference_end|>" ]
[ 17, 20, 26, 37 ]
{"<|multi_cite_1_1|>": "arxiv-77200", "<|multi_cite_1_2|>": "ss-1274564", "<|cite_2|>": "ss-2241759", "<|cite_3|>": "ss-1644167", "<|multi_cite_4_1|>": "arxiv-81762", "<|multi_cite_4_2|>": "ss-1663713", "<|multi_cite_4_3|>": "ss-1170573", "<|multi_cite_5_1|>": "arxiv-42955", "<|multi_cite_5_2|>": "ss-918091", "<|multi_cite_5_3|>": "ss-1532623", "<|multi_cite_5_4|>": "ss-1258679", "<|multi_cite_5_6|>": "ss-1532622", "<|multi_cite_6_1|>": "arxiv-161073", "<|multi_cite_6_2|>": "ss-1126322", "<|multi_cite_6_3|>": "ss-1278662", "<|multi_cite_7_1|>": "arxiv-90246", "<|multi_cite_7_2|>": "arxiv-62905", "<|multi_cite_7_3|>": "ss-1207682", "<|multi_cite_7_4|>": "arxiv-97580", "<|multi_cite_7_5|>": "arxiv-109370", "<|multi_cite_7_6|>": "arxiv-73833", "<|multi_cite_7_7|>": "arxiv-105627", "<|multi_cite_7_8|>": "ss-2347395", "<|multi_cite_7_9|>": "arxiv-150778", "<|cite_8|>": "ss-846850", "<|cite_9|>": "ss-2381844", "<|cite_10|>": "arxiv-118346", "<|cite_11|>": "arxiv-62905", "<|multi_cite_12_1|>": "arxiv-62905", "<|multi_cite_12_2|>": "arxiv-161073", "<|multi_cite_13_1|>": "ss-1374834", "<|multi_cite_13_2|>": "ss-2381844", "<|cite_14|>": "ss-2381845", "<|cite_15|>": "ss-825097", "<|cite_16|>": "ss-2323357", "<|cite_17|>": "ss-2323357", "<|multi_cite_18_1|>": "arxiv-161073", "<|multi_cite_18_2|>": "ss-1126322", "<|cite_19|>": "ss-1980514", "<|cite_20|>": "ss-1980514", "<|cite_21|>": "ss-1980514", "<|cite_22|>": "arxiv-118346", "<|multi_cite_23_1|>": "arxiv-161073", "<|multi_cite_23_2|>": "ss-1126322", "<|cite_24|>": "ss-2381846"}
1502.05928-1
, and~(\ref{eq:loss2}) becomes \begin{equation}\label{eq:lasso} L(\mathbf{X},\mathbf{D},\mathbf{A})=\min_{\mathbf{D},\mathbf{A}}\sum_{i=1}^{n}\left(\frac{1}{2}\|\mathbf{x}_i-\mathbf{D}\bm{\alpha}_i\|_{2}^{2}+\lambda\|\bm{\alpha}_i\|_{1}\right), \end{equation} where $\mathbf{x}_i$ is the $i^{\textup{th}}$ training sample and $\bm{\alpha}_i$ is the $i^{\textup{th}}$ column of $\mathbf{A}$. The reconstructive formulation given in~(\ref{eq:lasso}) is non-convex when both the dictionary $\mathbf{D}$ and the coefficients $\mathbf{A}$ are unknown. However, the problem is convex in each of these two unknowns when the other is held fixed, so it can be solved by iterating alternately over them. Several fast algorithms have recently been proposed for this purpose, such as K-SVD <|cite_start|> (Reference: K-SVD : An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.) <|cite_end|>, online learning <|cite_start|> (Reference: Online dictionary learning for sparse coding: Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples.
A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.) <|cite_end|> <|cite_start|> (Reference: Online Learning for Matrix Factorization and Sparse Coding: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.) <|cite_end|>, and cyclic coordinate descent <|cite_start|> (Reference: {Regularization paths for generalized linear models via coordinate descent: We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ(1) (the lasso), ℓ(2) (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.) <|cite_end|>. In~(\ref{eq:lasso}), the dictionary and the sparse coefficients are computed primarily to minimize the reconstruction error in the mean-squared sense. While this works well in applications whose primary goal is to reconstruct signals as accurately as possible, such as denoising, image inpainting, and coding, it is not the ultimate goal in classification tasks, where discriminating between signals is more important <|cite_start|> (Reference: Sparse representation for signal classification: In this paper, application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) for signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminative analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals due to lacking crucial properties for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation.
The approach combines the discrimination power of the discriminative methods with the reconstruction property and the sparsity of the sparse representation that enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals.) <|cite_end|>. Recently, there have been several attempts to incorporate category information into the computation of the dictionary, the coefficients, or both. This branch of DLSR is called supervised dictionary learning and sparse representation (S-DLSR). The following section provides an overview of the S-DLSR approaches proposed in the literature. <|paper_end|>
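As a concrete illustration of the alternating scheme behind Eq.~(\ref{eq:lasso}) in the excerpt above, the following minimal sketch (our own, not the K-SVD, online, or coordinate-descent algorithms cited there; all function and variable names are hypothetical) solves the sparse-coding step with ISTA and updates the dictionary with a gradient step followed by atom renormalization:
\begin{verbatim}
import numpy as np

def soft_threshold(V, tau):
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def sparse_code_ista(X, D, lam, n_iter=100):
    """Fix D and solve min_A 0.5*||X - D A||_F^2 + lam*||A||_1 with ISTA."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)     # 1 / Lipschitz constant
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = soft_threshold(A - step * (D.T @ (D @ A - X)), lam * step)
    return A

def update_dictionary(X, D, A):
    """Fix A and take one gradient step on D, then renormalize the atoms
    to unit norm (a common heuristic to remove the scale ambiguity)."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
    D = D - step * ((D @ A - X) @ A.T)
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

def dictionary_learning(X, n_atoms, lam, n_outer=30, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_outer):
        A = sparse_code_ista(X, D, lam)   # convex in A for fixed D
        D = update_dictionary(X, D, A)    # convex in D for fixed A
    return D, A
\end{verbatim}
Each subproblem (over A with D fixed, and over D with A fixed) is convex, which is exactly the alternation the excerpt describes; the joint problem remains non-convex, so only a local solution is guaranteed in general.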
[ "<|reference_start|> Online dictionary learning for sparse coding: Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets. <|reference_end|>", "<|reference_start|> Online Learning for Matrix Factorization and Sparse Coding: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets. <|reference_end|>", "<|reference_start|> {Regularization paths for generalized linear models via coordinate descent: We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ(1) (the lasso), ℓ(2) (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods. <|reference_end|>", "<|reference_start|> Sparse representation for signal classification: In this paper, application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) for signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminative analysis (LDA), are better suited for classification tasks. 
However, discriminative methods are usually sensitive to corruption in signals due to lacking crucial properties for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of the discriminative methods with the reconstruction property and the sparsity of the sparse representation that enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals. <|reference_end|>" ]
[ 1, 2, 3, 4 ]
{"<|cite_1|>": "ss-1317763", "<|multi_cite_2_1|>": "ss-1378478", "<|multi_cite_2_2|>": "ss-792684", "<|cite_3|>": "ss-2094762", "<|multi_cite_4_1|>": "ss-792685", "<|multi_cite_4_2|>": "ss-772163", "<|multi_cite_4_3|>": "ss-1372620", "<|cite_5|>": "ss-1014574", "<|cite_6|>": "ss-844467", "<|cite_7|>": "ss-1254879", "<|cite_8|>": "ss-1366544", "<|multi_cite_9_1|>": "ss-1214434", "<|multi_cite_9_2|>": "ss-792686", "<|multi_cite_9_3|>": "ss-792687", "<|multi_cite_10_1|>": "ss-1376926", "<|multi_cite_10_2|>": "ss-1817992", "<|multi_cite_11_1|>": "ss-998030", "<|multi_cite_11_2|>": "ss-1005383", "<|multi_cite_11_3|>": "ss-792688", "<|multi_cite_12_1|>": "ss-1936541", "<|multi_cite_12_2|>": "ss-1186558", "<|multi_cite_12_3|>": "ss-792689", "<|multi_cite_12_4|>": "ss-2279626", "<|cite_13|>": "ss-792690", "<|multi_cite_14_1|>": "arxiv-4899", "<|multi_cite_14_2|>": "ss-930128", "<|multi_cite_14_3|>": "arxiv-33961", "<|multi_cite_15_1|>": "ss-793499", "<|multi_cite_15_2|>": "ss-875839", "<|multi_cite_16_1|>": "ss-1814025", "<|multi_cite_16_2|>": "ss-792691", "<|cite_17|>": "ss-881994", "<|cite_18|>": "ss-826608", "<|multi_cite_19_1|>": "arxiv-3536", "<|multi_cite_19_2|>": "arxiv-672114", "<|cite_20|>": "ss-835639", "<|cite_21|>": "ss-808398", "<|cite_22|>": "ss-1282195", "<|multi_cite_23_1|>": "ss-1156325", "<|multi_cite_23_2|>": "ss-1029272", "<|multi_cite_24_1|>": "ss-990150", "<|multi_cite_24_2|>": "ss-1062180", "<|multi_cite_24_3|>": "ss-720079", "<|multi_cite_24_4|>": "ss-1275477", "<|cite_25|>": "arxiv-4899", "<|cite_26|>": "ss-792690", "<|cite_27|>": "ss-793499", "<|cite_28|>": "ss-1814025", "<|cite_29|>": "ss-826608", "<|cite_30|>": "ss-881994", "<|cite_31|>": "arxiv-4899", "<|multi_cite_32_1|>": "ss-990150", "<|multi_cite_32_2|>": "ss-1062180", "<|multi_cite_32_3|>": "ss-720079", "<|cite_33|>": "ss-1062180", "<|multi_cite_34_1|>": "ss-1156325", "<|multi_cite_34_2|>": "ss-1029272", "<|multi_cite_34_3|>": "ss-1062180", "<|cite_35|>": "ss-881994", "<|cite_36|>": "ss-1282195", "<|cite_37|>": "ss-766242", "<|multi_cite_38_1|>": "ss-1544290", "<|multi_cite_38_2|>": "arxiv-8525", "<|cite_39|>": "ss-1318754", "<|cite_40|>": "ss-1268364"}
2406.10514
<|paper_start|> Title: GTR-Voice: Articulatory Phonetics Informed Controllable Expressive Speech Synthesis Abstract: GTR-Voice: Articulatory Phonetics Informed Controllable Expressive Speech Synthesis: Expressive speech synthesis aims to generate speech that captures a wide range of para-linguistic features, including emotion and articulation, though current research primarily emphasizes emotional aspects over the nuanced articulatory features mastered by professional voice actors. Inspired by this, we explore expressive speech synthesis through the lens of articulatory phonetics. Specifically, we define a framework with three dimensions: Glottalization, Tenseness, and Resonance (GTR), to guide the synthesis at the voice production level. With this framework, we record a high-quality speech dataset named GTR-Voice, featuring 20 Chinese sentences articulated by a professional voice actor across 125 distinct GTR combinations. We verify the framework and GTR annotations through automatic classification and listening tests, and demonstrate precise controllability along the GTR dimensions on two fine-tuned expressive TTS models. We open-source the dataset and TTS models. Introduction Expressive speech synthesis aims to generate speech that captures a wide range of para-linguistic features <|cite_start|> (Reference: A Survey on Neural Speech Synthesis: Text to speech (TTS), or speech synthesis, which aims to synthesize intelligible and natural speech given text, is a hot research topic in speech, language, and machine learning communities and has broad applications in the industry. As the development of deep learning and artificial intelligence, neural network-based TTS has significantly improved the quality of synthesized speech in recent years. In this paper, we conduct a comprehensive survey on neural TTS, aiming to provide a good understanding of current research and future trends. We focus on the key components in neural TTS, including text analysis, acoustic models and vocoders, and several advanced topics, including fast TTS, low-resource TTS, robust TTS, expressive TTS, and adaptive TTS, etc. We further summarize resources related to TTS (e.g., datasets, opensource implementations) and discuss future research directions. This survey can serve both academic researchers and industry practitioners working on TTS.) <|cite_end|> <|cite_start|> (Reference: {A review of deep learning based speech synthesis: Speech synthesis, also known as text-to-speech (TTS), has attracted increasingly more attention. Recent advances on speech synthesis are overwhelmingly contributed by deep learning or even end-to-end techniques which have been utilized to enhance a wide range of application scenarios such as intelligent speech interaction, chatbot or conversational artificial intelligence (AI). For speech synthesis, deep learning based techniques can leverage a large scale of pairs to learn effective feature representations to bridge the gap between text and speech, thus better characterizing the properties of events. To better understand the research dynamics in the speech synthesis field, this paper firstly introduces the traditional speech synthesis methods and highlights the importance of the acoustic modeling from the composition of the statistical parametric speech synthesis (SPSS) system. It then gives an overview of the advances on deep learning based speech synthesis, including the end-to-end approaches which have achieved start-of-the-art performance in recent years. 
Finally, it discusses the problems of the deep learning methods for speech synthesis, and also points out some appealing research directions that can bring the speech synthesis research into a new frontier.) <|cite_end|> <|cite_start|> (Reference: The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach: As part of the Human-Computer Interaction field, Expressive speech synthesis is a very rich domain as it requires knowledge in areas such as machine learning, signal processing, sociology, psychology. In this Chapter, we will focus mostly on the technical side. From the recording of expressive speech to its modeling, the reader will have an overview of the main paradigms used in this field, through some of the most prominent systems and methods. We explain how speech can be represented and encoded with audio features. We present a history of the main methods of Text-to-Speech synthesis: concatenative, parametric and statistical parametric speech synthesis. Finally, we focus on the last one, with the last techniques modeling Text-to-Speech synthesis as a sequence-to-sequence problem. This enables the use of Deep Learning blocks such as Convolutional and Recurrent Neural Networks as well as Attention Mechanism. The last part of the Chapter intends to assemble the different aspects of the theory and summarize the concepts.) <|cite_end|>. A successful system could find broad applications in media production, education, and entertainment, creating a realistic and immersive experience for users. In recent years, deep-learning based speech synthesis methods have achieved high quality and naturalness <|cite_start|> (Reference: {NaturalSpeech: End-to-End Text-to-Speech Synthesis with Human-Level Quality: Text-to-speech (TTS) has made rapid progress in both academia and industry in recent years. Some questions naturally arise that whether a TTS system can achieve human-level quality, how to define/judge that quality, and how to achieve it. In this paper, we answer these questions by first defining the human-level quality based on the statistical significance of subjective measure and introducing appropriate guidelines to judge it, and then developing a TTS system called NaturalSpeech that achieves human-level quality on benchmark datasets. Specifically, we leverage a variational auto-encoder (VAE) for end-to-end text-to-waveform generation, with several key modules to enhance the capacity of the prior from text and reduce the complexity of the posterior from speech, including phoneme pre-training, differentiable duration modeling, bidirectional prior/posterior modeling, and a memory mechanism in VAE. 
Experimental evaluations on the popular LJSpeech dataset show that our proposed NaturalSpeech achieves $-0.01$ CMOS (comparative mean opinion score) to human recordings at the sentence level, with Wilcoxon signed rank test at p-level $p \gg 0.05$, which demonstrates no statistically significant difference from human recordings for the first time.) <|cite_end|> <|cite_start|> (Reference: ControlVC: Zero-Shot Voice Conversion with Time-Varying Controls on Pitch and Speed: Recent developments in neural speech synthesis and vocoding have sparked a renewed interest in voice conversion (VC). Beyond timbre transfer, achieving controllability on para-linguistic parameters such as pitch and Speed is critical in deploying VC systems in many application scenarios. Existing studies, however, either only provide utterance-level global control or lack interpretability on the controls. In this paper, we propose ControlVC, the first neural voice conversion system that achieves time-varying controls on pitch and speed. ControlVC uses pre-trained encoders to compute pitch and linguistic embeddings from the source utterance and speaker embeddings from the target utterance. These embeddings are then concatenated and converted to speech using a vocoder. It achieves speed control through TD-PSOLA pre-processing on the source utterance, and achieves pitch control by manipulating the pitch contour before feeding it to the pitch encoder. Systematic subjective and objective evaluations are conducted to assess the speech quality and controllability. Results show that, on non-parallel and zero-shot conversion tasks, ControlVC significantly outperforms two other self-constructed baselines on speech quality, and it can successfully achieve time-varying pitch and speed control.) <|cite_end|>. On expressiveness, state-of-the-art methods show good emotion rendering <|cite_start|> (Reference: EmoDiff: Intensity Controllable Emotional Text-to-Speech with Soft-Label Guidance: Although current neural text-to-speech (TTS) models are able to generate high-quality speech, intensity controllable emotional TTS is still a challenging task. Most existing methods need external optimizations for intensity calculation, leading to suboptimal results or degraded quality. In this paper, we propose EmoDiff, a diffusion-based TTS model where emotion intensity can be manipulated by a proposed soft-label guidance technique derived from classifier guidance. Specifically, instead of being guided with a one-hot vector for the specified emotion, EmoDiff is guided with a soft label where the value of the specified emotion and \textit{Neutral} is set to $\alpha$ and $1-\alpha$ respectively. The $\alpha$ here represents the emotion intensity and can be chosen from 0 to 1. Our experiments show that EmoDiff can precisely control the emotion intensity while maintaining high voice quality. Moreover, diverse speech with specified emotion intensity can be generated by sampling in the reverse denoising process.
<|cite_end|> <|cite_start|> (Reference: Fine-grained Emotional Control of Text-To-Speech: Learning To Rank Inter- And Intra-Class Emotion Intensities: State-of-the-art Text-To-Speech (TTS) models are capable of producing high-quality speech. The generated speech, however, is usually neutral in emotional expression, whereas very often one would want fine-grained emotional control of words or phonemes. Although still challenging, the first TTS models have been recently proposed that are able to control voice by manually assigning emotion intensity. Unfortunately, due to the neglect of intra-class distance, the intensity differences are often unrecognizable. In this paper, we propose a fine-grained controllable emotional TTS, that considers both inter- and intra-class distances and be able to synthesize speech with recognizable intensity difference. Our subjective and objective experiments demonstrate that our model exceeds two state-of-the-art controllable TTS models for controllability, emotion expressiveness and naturalness.) <|cite_end|> and style imitation <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: Audiobox: Unified Audio Generation with Natural Language Prompts: Audio is an essential part of our life, but creating it often requires expertise and is time-consuming. Research communities have made great progress over the past year advancing the performance of large scale audio generative models for a single modality (speech, sound, or music) through adopting more powerful generative models and scaling data. However, these models lack controllability in several aspects: speech generation models cannot synthesize novel styles based on text description and are limited on domain coverage such as outdoor environments; sound generation models only provide coarse-grained control based on descriptions like "a person speaking" and would only generate mumbling human voices. This paper presents Audiobox, a unified model based on flow-matching that is capable of generating various audio modalities. We design description-based and example-based prompting to enhance controllability and unify speech and sound generation paradigms. We allow transcript, vocal, and other audio styles to be controlled independently when generating speech. To improve model generalization with limited labels, we adapt a self-supervised infilling objective to pre-train on large quantities of unlabeled audio. Audiobox sets new benchmarks on speech and sound generation (0.745 similarity on Librispeech for zero-shot TTS; 0.77 FAD on AudioCaps for text-to-sound) and unlocks new methods for generating audio with novel vocal and acoustic styles. We further integrate Bespoke Solvers, which speeds up generation by over 25 times compared to the default ODE solver for flow-matching, without loss of performance on several tasks. Our demo is available at https://audiobox.metademolab.com/) <|cite_end|>. However, compared to humans, especially professional voice actors, their expressiveness and controllability are still very limited. There are many expressions that voice actors can produce that these state-of-the-art methods cannot. An important reason, we argue, is that the scope of existing research is limited to certain aspects of speech expressiveness, such as emotion and style, while many other aspects have received little attention from the research community.
According to the definition of expressive speech synthesis, speech expressiveness manifests in various para-linguistic aspects. These range from low-level aspects such as articulation and pronunciation, through mid-level aspects such as speaking style, to high-level aspects such as emotion and attitude. Existing research has primarily focused on the mid- and high-level aspects, especially style <|cite_start|> (Reference: Self-supervised Context-aware Style Representation for Expressive Speech Synthesis: Expressive speech synthesis, like audiobook synthesis, is still challenging for style representation learning and prediction. Deriving from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework for learning style representation from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioned embedding in a multi-style Transformer TTS. Comparing with multi-style TTS by predicting style tags trained on the same dataset but with human annotations, our method achieves improved results according to subjective evaluations on both in-domain and out-of-domain test sets in audiobook speech. Moreover, with implicit context-aware style representation, the emotion transition of synthesized audio in a long paragraph appears more natural. The audio samples are available on the demo web.) <|cite_end|> <|cite_start|> (Reference: EE-TTS: Emphatic Expressive TTS with Linguistic Information: While Current TTS systems perform well in synthesizing high-quality speech, producing highly expressive speech remains a challenge. Emphasis, as a critical factor in determining the expressiveness of speech, has attracted more attention nowadays. Previous works usually enhance the emphasis by adding intermediate features, but they can not guarantee the overall expressiveness of the speech. To resolve this matter, we propose Emphatic Expressive TTS (EE-TTS), which leverages multi-level linguistic information from syntax and semantics. EE-TTS contains an emphasis predictor that can identify appropriate emphasis positions from text and a conditioned acoustic model to synthesize expressive speech with emphasis and linguistic information. Experimental results indicate that EE-TTS outperforms baseline with MOS improvements of 0.49 and 0.67 in expressiveness and naturalness. EE-TTS also shows strong generalization across different datasets according to AB test results.) <|cite_end|> and emotion, while the low-level aspects have received little attention <|cite_start|> (Reference: Deep Speech Synthesis from Articulatory Representations: In the articulatory synthesis task, speech is synthesized from input features containing information about the physical behavior of the human vocal tract. This task provides a promising direction for speech synthesis research, as the articulatory space is compact, smooth, and interpretable. Current works have highlighted the potential for deep learning models to perform articulatory synthesis. However, it remains unclear whether these models can achieve the efficiency and fidelity of the human speech production system.
To help bridge this gap, we propose a time-domain articulatory synthesis methodology and demonstrate its efficacy with both electromagnetic articulography (EMA) and synthetic articulatory feature inputs. Our model is computationally efficient and achieves a transcription word error rate (WER) of 18.5% for the EMA-to-speech task, yielding an improvement of 11.6% compared to prior work. Through interpolation experiments, we also highlight the generalizability and interpretability of our approach.) <|cite_end|> <|cite_start|> (Reference: Integrating Articulatory Information in Deep Learning-Based Text-to-Speech Synthesis: Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)based text-to-speech (TTS) synthesis. Recently, deep learningbased TTS has outperformed HMM-based approaches. However, articulatory information has rarely been integrated in deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data to deep learning-based TTS. The integration of articulatory information was achieved in two ways: (1) direct integration, where articulatory and acoustic features were the output of a deep neural network (DNN), and (2) direct integration plus forward-mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; These forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiment. Both objective measures and subjective judgment by human listeners showed the approaches integrated articulatory information outperformed the baseline approach (without using articulatory information) in terms of naturalness and speaker voice identity (voice similarity).) <|cite_end|>. It is worth noting that the low-level and high-level aspects can vary independently in speech expression. For example, a professional voice actor is able to speak a sentence with diverse articulation methods but a particular emotion. Conversely, they can also speak with a particular articulation method but diverse emotions. The mid-level style aspect is related to the low-level articulation aspects; however, the former also includes prosodic features beyond the articulation and pronunciation of words. Furthermore, styles are categorical and not easy for humans or algorithms to manipulate (e.g., transitioning from one style to another, or varying a style slightly) <|cite_start|> (Reference: The Art Of Voice Acting The Craft And Business Of Performing For Voiceover: ) <|cite_end|> <|cite_start|> (Reference: This is the post-print version of Matamala , Anna ( 2019 ) " Voice-over : practice , research and future prospects ": ) <|cite_end|>. As expressive speech synthesis systems still fall short of humans, it is instructive to study what the most capable humans, i.e., professional voice actors, can do with speech expressiveness, and how they do it. In fact, they can manipulate many articulatory aspects to achieve the desired expression <|cite_start|> (Reference: Voice for Performance: Training the Actor's Voice: ) <|cite_end|>. For example, voice actors may be asked to change their expression by adopting a breathier, softer tone, a more languid pronunciation, and a hint of nasality. They understand how these three requirements correlate with specific vocal techniques and can produce a voice that integrates all of these characteristics.
Specifically, a breathier, softer tone corresponds to changes in the glottal configuration; a more languid pronunciation implies a variation in the muscle engagement of the articulators during articulation; and a nasal sound is achieved by manipulating the vocal tract shape and configuration to alter the resonance cavities and obtain the desired voice characteristics. In other words, there is a set of articulatory aspects that they can adjust to achieve the desired timbre and speaking style. In fact, throughout their curriculum, they learn to adjust many of these aspects independently and simultaneously. Inspired by the capabilities of voice actors, in this paper we investigate expressive speech synthesis from the perspective of articulatory phonetics. Specifically, we identify three fundamental dimensions of speech expression at the articulation level, namely Glottalization, Tenseness, and Resonance (GTR). Under this framework, we designed and recorded a high-quality expressive speech dataset comprising 125 distinct GTR voice types uttered by a single professional voice actor. The consistency of the dataset labels was evaluated through listening tests and automatic classification. Subsequently, we adapted two expressive Text-to-Speech (TTS) models (FastPitch <|cite_start|> (Reference: FastPitch: Parallel Text-to-speech with Pitch Prediction: We present FastPitch, a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantic of the utterance, and in the end more engaging to the listener. Uniformly increasing or decreasing pitch with FastPitch generates speech that resembles the voluntary modulation of voice. Conditioning on frequency contours improves the overall quality of synthesized speech, making it comparable to state-of-the-art. It does not introduce an overhead, and FastPitch retains the favorable, fully-parallel Transformer architecture, with over 900x real-time factor for mel-spectrogram synthesis of a typical utterance.) <|cite_end|> and StyleTTS <|cite_start|> (Reference: FluentTTS: Text-dependent Fine-grained Style Control for Multi-style TTS: In this paper, we propose a method to flexibly control the local prosodic variation of a neural text-to-speech (TTS) model. To provide expressiveness for synthesized speech, conventional TTS models utilize utterance-wise global style embeddings that are obtained by compressing frame-level embeddings along the time axis. However, since utterance-wise global features do not contain sufficient information to represent the characteristics of word-level local features, they are not appropriate for direct use on controlling prosody at a fine scale. In multi-style TTS models, it is very important to have the capability to control local prosody because it plays a key role in finding the most appropriate text-to-speech pair among many one-to-many mapping candidates. To explicitly present local prosodic characteristics to the contextual information of the corresponding input text, we propose a module to predict the fundamental frequency (F0) of each text by conditioning on the utterance-wise global style embedding. We also estimate multi-style embeddings using a multi-style encoder, which takes as inputs both a global utterance-wise embedding and a local F0 embedding.
Our multi-style embedding enhances the naturalness and expressiveness of synthesized speech and is able to control prosody styles at the word-level or phoneme-level.) <|cite_end|>) to investigate the feasibility of GTR control in expressive TTS. We conducted subjective evaluations of GTR controllability as well as of the quality and naturalness of the generated speech. Results show that the adapted models exhibit fine control over each dimension of GTR in both Mandarin Chinese (same language) and English (cross-lingual) scenarios. <|paper_end|>
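As an aside on what GTR conditioning could look like in practice: the sketch below is our illustration only, not the authors' published adaptation of FastPitch or StyleTTS. It assumes a common conditioning pattern in controllable TTS, in which a 3-dimensional Glottalization/Tenseness/Resonance vector is linearly projected and added to the phoneme encoder states; the module name, the projection-and-add scheme, and the hidden width are all assumptions.

import torch
import torch.nn as nn

class GTRConditioner(nn.Module):
    """Hypothetical sketch: inject a 3-dim (Glottalization, Tenseness, Resonance)
    control vector into TTS encoder states. Projection-and-add is an assumed
    scheme; the default width of 384 is only a typical FastPitch-like value."""
    def __init__(self, hidden_dim: int = 384):
        super().__init__()
        self.proj = nn.Linear(3, hidden_dim)  # map GTR scores to encoder width

    def forward(self, encoder_states: torch.Tensor, gtr: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, time, hidden_dim); gtr: (batch, 3), e.g. scores in [0, 1]
        return encoder_states + self.proj(gtr).unsqueeze(1)  # broadcast over time

Under such a scheme, sweeping one coordinate of the control vector while holding the other two fixed is precisely the kind of per-dimension manipulation that the controllability evaluations above measure.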
[ "<|reference_start|> EmoDiff: Intensity Controllable Emotional Text-to-Speech with Soft-Label Guidance: Although current neural text-to-speech (TTS) models are able to generate high-quality speech, intensity controllable emotional TTS is still a challenging task. Most existing methods need external optimizations for intensity calculation, leading to suboptimal results or degraded quality. In this paper, we propose EmoDiff, a diffusion-based TTS model where emotion intensity can be manipulated by a proposed soft-label guidance technique derived from classifier guidance. Specifically, instead of being guided with a one-hot vector for the specified emotion, EmoDiff is guided with a soft label where the value of the specified emotion and \\textit{Neutral} is set to $\\alpha$ and $1-\\alpha$ respectively. The $\\alpha$ here represents the emotion intensity and can be chosen from 0 to 1. Our experiments show that EmoDiff can precisely control the emotion intensity while maintaining high voice quality. Moreover, diverse speech with specified emotion intensity can be generated by sampling in the reverse denoising process. <|reference_end|>", "<|reference_start|> Self-supervised Context-aware Style Representation for Expressive Speech Synthesis: Expressive speech synthesis, like audiobook synthesis, is still challenging for style representation learning and prediction. Deriving from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework for learning style representation from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioned embedding in a multi-style Transformer TTS. Comparing with multi-style TTS by predicting style tags trained on the same dataset but with human annotations, our method achieves improved results according to subjective evaluations on both in-domain and out-of-domain test sets in audiobook speech. Moreover, with implicit context-aware style representation, the emotion transition of synthesized audio in a long paragraph appears more natural. The audio samples are available on the demo web. <|reference_end|>", "<|reference_start|> This is the post-print version of Matamala , Anna ( 2019 ) \" Voice-over : practice , research and future prospects \": <|reference_end|>", "<|reference_start|> Voice for Performance: Training the Actor's Voice: <|reference_end|>" ]
[ 5, 9, 14, 15 ]
{"<|multi_cite_1_1|>": "arxiv-351874", "<|multi_cite_1_2|>": "ss-730060", "<|multi_cite_1_3|>": "arxiv-228766", "<|multi_cite_2_1|>": "ss-1160092", "<|multi_cite_2_2|>": "arxiv-448483", "<|multi_cite_3_1|>": "arxiv-462856", "<|multi_cite_3_3|>": "arxiv-485742", "<|multi_cite_4_1|>": "ss-832115", "<|multi_cite_4_2|>": "arxiv-571265", "<|multi_cite_5_1|>": "arxiv-429559", "<|multi_cite_5_2|>": "ss-1476042", "<|multi_cite_7_1|>": "arxiv-446025", "<|multi_cite_7_2|>": "ss-906283", "<|multi_cite_8_1|>": "ss-1849975", "<|multi_cite_8_2|>": "ss-1849976", "<|cite_9|>": "ss-1849977", "<|cite_10|>": "arxiv-271272", "<|cite_11|>": "ss-2080544"}
1803.01833
<|paper_start|> Title: Marginal Singularity, and the Benefits of Labels in Covariate-Shift Abstract: Marginal Singularity, and the Benefits of Labels in Covariate-Shift: We present new minimax results that concisely capture the relative benefits of source and target labeled data, under covariate-shift. Namely, we show that the benefits of target labels are controlled by a transfer-exponent $\gamma$ that encodes how singular Q is locally w.r.t. P, and interestingly allows situations where transfer did not seem possible under previous insights. In fact, our new minimax analysis -- in terms of $\gamma$ -- reveals a continuum of regimes ranging from situations where target labels have little benefit, to regimes where target labels dramatically improve classification. We then show that a recently proposed semi-supervised procedure can be extended to adapt to unknown $\gamma$, and therefore requests labels only when beneficial, while achieving minimax transfer rates. Introduction Transfer learning addresses the many practical situations where much labeled data is available from a \emph{source} distribution $P$, but relatively little labeled data is available from a \emph{target} distribution $Q$. The aim is to harness source data to improve prediction on the target $Q$, assuming the source $P$ is informative about $Q$. Naturally, a main theoretical question lies in understanding relations (or divergences) between $P$ and $Q$ that allow information transfer, and in particular, that tightly characterize the relative benefits of source and target labeled samples (towards informing practice). We focus on nonparametric classification, i.e., predicting labels $Y$ of future $X$ drawn from $Q$, with minimal assumptions on $P$ and $Q$. The most common setting is that of \emph{covariate-shift} where $P_{Y|X} = Q_{Y|X}$, but $Q_X$ may differ from $P_X$. While equal conditionals may seem restrictive, the assumption is well motivated by common applications of transfer (e.g., image, speech, or document classification). The question is then how to express the changes in marginals $P_X, Q_X$ in the context of transfer. We present new minimax results that concisely capture the relative benefits of source and target labeled data, under covariate-shift. Namely, we show that the benefits of target labels are controlled by a \emph{transfer-exponent} $\gamma$ that encodes how \emph{singular} $Q$ is locally w.r.t. $P$, and interestingly allows situations where transfer did not seem possible under previous insights. In fact, our new minimax analysis -- in terms of $\gamma$ -- reveals a \emph{continuum of regimes} ranging from situations where target labels have little benefit, to regimes where target labels dramatically improve classification. The notion of transfer-exponent follows a natural intuition, present in the literature, that transfer is hardest if $P_X$ does not properly cover regions of large $Q_X$ mass. In particular, $\gamma$ parametrizes the behavior of ball-mass ratios $Q(B(x, r))/P(B(x, r))$ as a function of neighborhood size $r$ (see Definition \ref{def:transferCoefficient}), namely, that these ratios behave like $r^{-\gamma}$. We will see, through both lower and upper-bounds, that transfer is easiest as $\gamma \to 0$ and hardest as $\gamma \to \infty$. Interestingly, $\gamma$ is well defined even when $Q$ is singular w.r.t.
$P$ -- in which case common notions of \emph{density-ratio} and information-theoretic divergences (KL or Renyi) fail to exist, and common extensions of total-variation can be too large to characterize transfer. We note that singularity of $Q$ w.r.t. $P$ is often the case in practice, where high-dimensional data tends to be highly structured, and transfer typically involves going from a generic set of data from a domain $P$ to a more structured subdomain $Q$. Here, our results can directly inform practice: target labels yield greater performance with lower-dimensional $Q$, but are not necessary; if $Q$ were of higher dimension than $P$, the benefits of source labels quickly saturate. Now when $Q$ and $P$ are of the same dimension, even sharing the same support, the notion of $\gamma$ still reveals a rich set of regimes where transfer is possible at different rates, while usual notions of task-relatedness might indicate otherwise. As alluded to above, a practical question motivating much of this work is whether, given a large database of source data, acquiring additional target data might further improve classification; this is usually difficult to test given the costs and unavailability of target data. Here, by capturing the interaction of source and target sample sizes in our rates, in terms of $\gamma$, we can sharply characterize those sampling regimes where target or source data are most beneficial. We then show that it is in fact possible to \emph{adapt} to unknown $\gamma$, i.e., request target labels only when beneficial, while also attaining nearly optimal classification rates in terms of unknown distributional parameters. \subsection*{Detailed Results and Related Work} Many interesting notions of divergence have been proposed that successfully capture a general sense of when transfer is possible. In fact, the literature on transfer is by now expansive, and we cannot hope to truly do it justice. A first line of work considers refinements of total-variation that encode changes in error over the classifiers being used (as defined by a hypothesis class $\mathcal{H}$). The most common such measures are the so-called $d_{\mathcal{A}}$-divergence <|cite_start|> (Reference: A theory of learning from different domains: ) <|cite_end|> <|cite_start|> (Reference: Impossibility Theorems for Domain Adaptation: The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needs for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption, i.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning.
Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions.) <|cite_end|> <|cite_start|> (Reference: Supplementary Material to A PAC-Bayesian Approach for Domain Adaptation with Specialization to Linear Classifiers: In this document, Section 1 contains some lemmas used in subsequent proofs, Section 2 presents an extended proof of the bound on the domain disagreement $dis_\rho(D_S, D_T)$ (Theorem 3 of the main paper), Section 3 introduces other PAC-Bayesian bounds for $dis_\rho(D_S, D_T)$ and $R_{P_T}(G_\rho)$, and Section 4 shows equations and implementation details about PBDA (our proposed learning algorithm for PAC-Bayesian DA tasks).) <|cite_end|> and $\mathcal{Y}$-discrepancy <|cite_start|> (Reference: Domain Adaptation: Learning Bounds and Algorithms: This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.) <|cite_end|> <|cite_start|> (Reference: New Analysis and Algorithm for Learning with Drifting Distributions: We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions both for a drifting PAC scenario and a tracking scenario. Our bounds are always tighter and in some cases substantially improve upon previous ones based on the $L_1$ distance. We also present a generalization of the standard on-line to batch conversion to the drifting scenario in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple QP. Finally, we report the results of preliminary experiments demonstrating the benefits of this algorithm.) <|cite_end|> <|cite_start|> (Reference: Adaptation based on generalized discrepancy: We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm (DM), previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample.
Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization in several tasks.) <|cite_end|>. These notions are the first to capture -- through \emph{differences} in mass over space -- the intuition that transfer is easiest when $P$ has sufficient mass in regions of substantial $Q$-mass. Typical excess-error bounds on classifiers learned from source (and some or no target) data are of the form $o_p(1) + C\cdot \text{divergence}(P, Q)$. In other words, transfer seems impossible when these divergences are large; this is certainly the case in very general situations. However, as we show, there are ranges of reasonable situations ($0\leq \gamma < \infty$) where transfer is possible, even at fast rates (while using only source data), yet the above divergences remain large (see Remark \ref{remark:divergences} of Section \ref{sec:transferexponent}). Also, interestingly, such divergences are symmetric for pairs $(P, Q)$, while our notion of $\gamma$ is not, attesting to the fact that transfer might be possible from $P$ to $Q$, while hard from $Q$ to $P$. Another prominent line of work, which has led to many practical procedures, considers so-called ratios of densities $f_Q/f_P$ or similarly Radon-Nikodym derivatives $dQ/dP$ as a way to capture the similarity between $P$ and $Q$ <|cite_start|> (Reference: Dataset Shift in Machine Learning: Dataset shift is a common problem in predictive modeling that occurs when the joint distribution of inputs and outputs differs between training and test stages. Covariate shift, a particular case of dataset shift, occurs when only the input distribution changes. Dataset shift is present in most practical applications, for reasons ranging from the bias introduced by experimental design to the irreproducibility of the testing conditions at training time. (An example is e-mail spam filtering, which may fail to recognize spam that differs in form from the spam the automatic filter has been built on.) Despite this, and despite the attention given to the apparently similar problems of semi-supervised learning and active learning, dataset shift has received relatively little attention in the machine learning community until recently. This volume offers an overview of current efforts to deal with dataset and covariate shift. The chapters offer a mathematical and philosophical introduction to the problem, place dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning, provide theoretical views of dataset and covariate shift (including decision theoretic and Bayesian perspectives), and present algorithms for covariate shift.
Contributors: Shai Ben-David, Steffen Bickel, Karsten Borgwardt, Michael Brückner, David Corfield, Amir Globerson, Arthur Gretton, Lars Kai Hansen, Matthias Hein, Jiayuan Huang, Takafumi Kanamori, Klaus-Robert Müller, Sam Roweis, Neil Rubens, Tobias Scheffer, Marcel Schmittfull, Bernhard Schölkopf, Hidetoshi Shimodaira, Alex Smola, Amos Storkey, Masashi Sugiyama, Choon Hui Teo. Neural Information Processing series) <|cite_end|> <|cite_start|> (Reference: Density ratio estimation in machine learning: Machine learning is an interdisciplinary field of science and engineering that studies mathematical theories and practical applications of systems that learn. This book introduces theories, methods, and applications of density ratio estimation, which is a newly emerging paradigm in the machine learning community. Various machine learning problems such as non-stationarity adaptation, outlier detection, dimensionality reduction, independent component analysis, clustering, classification, and conditional density estimation can be systematically solved via the estimation of probability density ratios. The authors offer a comprehensive introduction of various density ratio estimators including methods via density estimation, moment matching, probabilistic classification, density fitting, and density ratio fitting, as well as describing how these can be applied to machine learning. The book also provides mathematical theories for density ratio estimation including parametric and non-parametric convergence analysis and numerical stability analysis to complete the first and definitive treatment of the entire framework of density ratio estimation in machine learning.) <|cite_end|>. It is often assumed in such work that $dQ/dP$ is bounded, which corresponds to the regime $\gamma = 0$ in our case (see Example \ref{ex:boundedDensity} of Section \ref{sec:transferexponent}). Typical excess-error bounds are dominated by the estimation rates for $dQ/dP$ (see, e.g., rates for $\alpha$-H\"older $dQ/dP$, $\alpha\to 0$, in <|cite_start|> (Reference: Lipschitz Density-Ratios, Structured Data, and Data-driven Tuning: Density-ratio estimation (i.e., estimating $f = f_Q/f_P$ for two unknown distributions $Q$ and $P$) has proved useful in many Machine Learning tasks, e.g., risk-calibration in transfer-learning, two-sample tests, and also useful in common techniques such as importance sampling and bias correction. While there are many important analyses of this estimation problem, the present paper derives convergence rates in other practical settings that are less understood, namely, extensions of traditional Lipschitz smoothness conditions, and common high-dimensional settings with structured data (e.g., manifold data, sparse data). Various interesting facts, which hold in earlier settings, are shown to extend to these settings. Namely, (1) optimal rates depend only on the smoothness of the ratio $f$, and not on the densities $f_Q, f_P$, supporting the belief that plugging in estimates for $f_Q, f_P$ is suboptimal; (2) optimal rates depend only on the intrinsic dimension of data, i.e., this problem -- unlike density estimation -- escapes the curse of dimension. We further show that near-optimal rates are attainable by estimators tuned from data alone, i.e., with no prior distributional information. This last fact is of special interest in unsupervised settings such as this one, where only oracle rates seem to be known, i.e., rates which assume critical distributional information usually unavailable in practice.)
<|cite_end|>), which unfortunately could be arbitrarily higher than the minimax rates we establish for that setting with $\gamma = 0$. Furthermore, as previously mentioned, $dQ/dP$ is inadequate in common scenarios with structured data, or can be unbounded even while $\gamma$ remains small (see Example \ref{ex:unboundedDensity} of Section \ref{sec:transferexponent}). Another line of work instead considers information-theoretic measures such as KL-divergence or Renyi divergence <|cite_start|> (Reference: Direct Importance Estimation with Model Selection and Its Application to Covariate Shift Adaptation: A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent -- weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Simulations illustrate the usefulness of our approach.) <|cite_end|>. In particular, such divergences are closer in spirit to our notion of transfer-exponent $\gamma$ (viewing it as roughly characterizing the log of ratios between $Q_X$ and $P_X$), but are also undefined in typical scenarios with structured data. Our upper-bounds are established under the nonparametric classification settings of <|cite_start|> (Reference: Fast Learning Rates for Plug-in Classifiers: It has been recently shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than $n^{-1/2}$. The work on this subject has suggested the following two conjectures: (i) the best achievable fast rate is of the order $n^{-1}$, and (ii) the plug-in classifiers generally converge more slowly than the classifiers based on empirical risk minimization. We show that both conjectures are not correct. In particular, we construct plug-in classifiers that can achieve not only fast, but also super-fast rates, that is, rates faster than $n^{-1}$. We establish minimax lower bounds showing that the obtained rates cannot be improved.) <|cite_end|>, which parametrize the noise distribution (via smoothness and noise conditions); this allows us to understand the interaction between $\gamma$ and noise parameters, and capture regimes where classification remains easy despite large $\gamma$. Our upper-bounds are established with a generic $k$-NN classifier defined over the combined source and target sample. In particular, our results imply new convergence rates of independent interest for vanilla $k$-NN (see Remark \ref{rem:newKNNbounds}, Section \ref{sec:upperbounds}).
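For concreteness, a pooled-sample $k$-NN of the kind just described can be sketched as follows; this is our minimal illustration, and the growth rate chosen for $k$ is an arbitrary placeholder rather than the tuning prescribed by the paper's analysis.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def pooled_knn_predict(X_src, y_src, X_tgt, y_tgt, X_eval):
    """Fit a vanilla k-NN on the union of labeled source (P) and target (Q)
    samples -- sensible under covariate-shift, where P(Y|X) = Q(Y|X) -- and
    predict at target points. The choice of k below is illustrative only."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    k = max(1, int(round(len(y) ** (2 / 3))))  # placeholder polynomial growth of k
    return KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(X_eval)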
Our lower-bounds are established over any learner with access to both source and target samples, and interestingly, also with access to infinite unlabeled source and target data (i.e., one allowed to know $P_X$ and $Q_X$). In other words, our lower-bounds imply that our rates cannot be improved with access to unlabeled data, which is often an important consideration in practice given the cost of target labels <|cite_start|> (Reference: Correcting Sample Selection Bias by Unlabeled Data: We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.) <|cite_end|> <|cite_start|> (Reference: On the Hardness of Domain Adaptation and the Utility of Unlabeled Target Samples: ) <|cite_end|>. A related practical consideration, alluded to earlier, is that of \emph{semisupervised} or \emph{active} transfer, where, given unlabeled target data, the goal is to request as few target labels as possible to improve classification over using source data alone <|cite_start|> (Reference: Active Supervised Domain Adaptation: ) <|cite_end|> <|cite_start|> (Reference: Co-Training for domain adaptation: Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for domain adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of co-training, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly outperforms the state-of-the-art on the 12-domain benchmark data set of Blitzer et al. [4]. Indeed, over a wide range (65 of 84 comparisons) of target supervision CODA achieves the best performance.) <|cite_end|> <|cite_start|> (Reference: Joint transfer and batch-mode active learning: Active learning and transfer learning are two different methodologies that address the common problem of insufficient labels. Transfer learning addresses this problem by using the knowledge gained from a related and already labeled data source, whereas active learning focuses on selecting a small set of informative samples for manual annotation. Recently, there has been much interest in developing frameworks that combine both transfer and active learning methodologies. A few such frameworks reported in the literature perform transfer and active learning in two separate stages.
In this work, we present an integrated framework that performs transfer and active learning simultaneously by solving a single convex optimization problem. The proposed framework computes the weights of source domain data and selects the samples from the target domain data simultaneously, by minimizing a common objective of reducing the distribution difference between the data set consisting of re-weighted source and queried target domain data and the set of unlabeled target domain data. Comprehensive experiments on real data demonstrate the superior performance of the proposed approach.) <|cite_end|>. An early theoretical treatment can be found in <|cite_start|> (Reference: A theory of transfer learning with applications to active learning: ) <|cite_end|>, which however considers a transfer setting with fixed marginals but varying conditionals (labeling functions). The recent work of <|cite_start|> (Reference: Active Nearest Neighbors in Changing Environments: While classic machine learning paradigms assume training and test data are generated from the same process, domain adaptation addresses the more realistic setting in which the learner has large quantities of labeled data from some source task but limited or no labeled data from the target task it is attempting to learn. In this work, we give the first formal analysis showing that using active learning for domain adaptation yields a way to address the statistical challenges inherent in this setting. We propose a novel nonparametric algorithm, ANDA, that combines an active nearest neighbor querying strategy with nearest neighbor prediction. We provide analyses of its querying behavior and of finite sample convergence rates of the resulting classifier under covariate shift. Our experiments show that ANDA successfully corrects for dataset bias in multiclass image categorization.) <|cite_end|> gives a nice first theoretical treatment of the problem under nonparametric conditions similar to ours; however, their work is less concerned with a minimax understanding of the problem, and mostly concerned with algorithmic strategies towards minimizing label requests. We will show how to extend their procedure to achieve minimax transfer rates in terms of unknown problem parameters, while requesting target labels only when necessary (as controlled by unknown $\gamma$). <|paper_end|>
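As a closing worked example of the transfer-exponent (ours, under regularity assumptions added for brevity, namely densities bounded above and below on their supports): if $dQ/dP \le C$, then for every ball,
\[
Q(B(x,r)) = \int_{B(x,r)} \frac{dQ}{dP}\, dP \;\le\; C\, P(B(x,r)),
\]
so $Q(B(x,r))/P(B(x,r)) \le C \asymp r^{0}$ and $\gamma = 0$. If instead $P$ has a density bounded above and below on $[0,1]^{d_P}$ while $Q$ is supported on a $d_Q$-dimensional affine subspace with $d_Q < d_P$, then for $x$ in the support of $Q$ we get $Q(B(x,r)) \asymp r^{d_Q}$ and $P(B(x,r)) \asymp r^{d_P}$, hence $Q(B(x,r))/P(B(x,r)) \asymp r^{-(d_P - d_Q)}$, i.e., $\gamma = d_P - d_Q$: the lower-dimensional the target, the larger $\gamma$ and the harder the transfer, matching the structured-subdomain discussion above.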
[ "<|reference_start|> New Analysis and Algorithm for Learning with Drifting Distributions: We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions both for a drifting PAC scenario and a tracking scenario. Our bounds are always tighter and in some cases substantially improve upon previous ones based on the $L_1$ distance. We also present a generalization of the standard on-line to batch conversion to the drifting scenario in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple QP. Finally, we report the results of preliminary experiments demonstrating the benefits of this algorithm. <|reference_end|>", "<|reference_start|> Adaptation based on generalized discrepancy: We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm, (DM), previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than the DM algorithm called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization in several tasks. <|reference_end|>", "<|reference_start|> Lipschitz Density-Ratios, Structured Data, and Data-driven Tuning: Density-ratio estimation (i.e. estimating f = fQ/fP for two unknown distributions Q and P ) has proved useful in many Machine Learning tasks, e.g., risk-calibration in transfer-learning, two-sample tests, and also useful in common techniques such importance sampling and bias correction. While there are many important analyses of this estimation problem, the present paper derives convergence rates in other practical settings that are less understood, namely, extensions of traditional Lipschitz smoothness conditions, and common high-dimensional settings with structured data (e.g. manifold data, sparse data). Various interesting facts, which hold in earlier settings, are shown to extend to these settings. Namely, (1) optimal rates depend only on the smoothness of the ratio f , and not on the densities fQ, fP , supporting the belief that plugging in estimates for fQ, fP is suboptimal; (2) optimal rates depend only on the intrinsic dimension of data, i.e. this problem – unlike density estimation – escapes the curse of dimension. We further show that near-optimal rates are attainable by estimators tuned from data alone, i.e. with no prior distributional information. This last fact is of special interest in unsupervised settings such as this one, where only oracle rates seem to be known, i.e., rates which assume critical distributional information usually unavailable in practice. <|reference_end|>", "<|reference_start|> On the Hardness of Domain Adaptation and the Utility of Unlabeled Target Samples: <|reference_end|>" ]
[ 4, 5, 8, 12 ]
{"<|multi_cite_4_1|>": "ss-955420", "<|multi_cite_4_2|>": "ss-773085", "<|multi_cite_4_3|>": "ss-781936", "<|multi_cite_5_1|>": "arxiv-6472", "<|multi_cite_5_2|>": "arxiv-31944", "<|multi_cite_5_3|>": "ss-1511163", "<|multi_cite_6_1|>": "ss-826812", "<|multi_cite_6_2|>": "ss-1262575", "<|cite_1|>": "ss-911621", "<|multi_cite_7_1|>": "ss-1512672", "<|cite_2|>": "ss-1151569", "<|multi_cite_8_1|>": "ss-1222728", "<|multi_cite_8_2|>": "ss-1292761", "<|multi_cite_9_1|>": "ss-1681631", "<|multi_cite_9_2|>": "ss-1072039", "<|multi_cite_9_3|>": "ss-1378329", "<|cite_10|>": "ss-1347973", "<|cite_3|>": "ss-1425943"}
2306.12152-0
<|paper_start|> Title: Exploiting Multimodal Synthetic Data for Egocentric Human-Object Interaction Detection in an Industrial Scenario Abstract: Exploiting Multimodal Synthetic Data for Egocentric Human-Object Interaction Detection in an Industrial Scenario: In this paper, we tackle the problem of Egocentric Human-Object Interaction (EHOI) detection in an industrial setting. To overcome the lack of public datasets in this context, we propose a pipeline and a tool for generating synthetic images of EHOIs paired with several annotations and data signals (e.g., depth maps or segmentation masks). Using the proposed pipeline, we present EgoISM-HOI, a new multimodal dataset composed of synthetic EHOI images in an industrial environment with rich annotations of hands and objects. To demonstrate the utility and effectiveness of synthetic EHOI data produced by the proposed tool, we designed a new method that predicts and combines different multimodal signals to detect EHOIs in RGB images. Our study shows that exploiting synthetic data to pre-train the proposed method significantly improves performance when tested on real-world data. Moreover, to fully understand the usefulness of our method, we conducted an in-depth analysis in which we compared and highlighted the superiority of the proposed approach over different state-of-the-art class-agnostic methods. To support research in this field, we publicly release the datasets, source code, and pre-trained models at https://iplab.dmi.unict.it/egoism-hoi. Introduction \label{sec:introduction} In recent years, wearable devices have become increasingly popular as they offer a first-person perspective of how users interact with the world around them. One of the advantages of wearable devices is that they allow the collection and processing of visual information without requiring users to hold any devices with their hands, enabling them to perform their activities in a natural way. Intelligent systems can analyze this visual information to provide services to support humans in different domains such as activities of daily living <|cite_start|> (Reference: You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video: We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publicly available.
<|cite_end|> <|cite_start|> (Reference: Scaling Egocentric Vision: The EPIC-KITCHENS Dataset: First-person vision is gaining interest as it offers a unique viewpoint on people's interaction with objects, their attention, and even intention. However, progress in this challenging domain has been relatively slow due to the lack of sufficiently large datasets. In this paper, we introduce EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32 participants in their native kitchen environments. Our videos depict nonscripted daily activities: we simply asked each participant to start recording every time they entered their kitchen. Recording took place in 4 cities (in North America and Europe) by participants belonging to 10 different nationalities, resulting in highly diverse cooking styles. Our dataset features 55 hours of video consisting of 11.5M frames, which we densely labeled for a total of 39.6K action segments and 454.3K object bounding boxes. Our annotation is unique in that we had the participants narrate their own videos (after recording), thus reflecting true intention, and we crowd-sourced ground-truths based on these. We describe our object, action and anticipation challenges, and evaluate several baselines over two test splits, seen and unseen kitchens. Dataset and Project page: http://epic-kitchens.github.io) <|cite_end|> <|cite_start|> (Reference: Ego4D: Around the World in 3,000 Hours of Egocentric Video: We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,670 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, we present a host of new benchmark challenges centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, we aim to push the frontier of first-person perception. Project page: https://ego4d-data.org/) <|cite_end|>, cultural sites <|cite_start|> (Reference: VEDI: Vision Exploitation for Data Interpretation: ) <|cite_end|>and industrial scenarios <|cite_start|> (Reference: Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities: Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-view action dataset, with simultaneous static (8) and egocentric (4) recordings. Sequences are annotated with more than 100K coarse and 1M fine-grained action segments, and 18M 3D hand poses. 
We benchmark on three action understanding tasks: recognition, anticipation and temporal segmentation. Additionally, we propose a novel task of detecting mistakes. The unique recording format and rich set of annotations allow us to investigate generalization to new toys, cross-view transfer, long-tailed distributions, and pose vs. appearance. We envision that Assembly101 will serve as a new challenge to investigate various activity understanding problems.) <|cite_end|> <|cite_start|> (Reference: A Wearable Device Application for Human-Object Interactions Detection: Over the past ten years, wearable technologies have continued to evolve. In the development of wearable technology, smart glasses for augmented and mixed reality are becoming particularly prominent. We believe that it is crucial to incorporate artificial intelligence algorithms that can understand real-world human behavior into these devices if we want them to be able to properly mix the real and virtual worlds and give assistance to the users. In this paper, we present an application for smart glasses that provides assistance to workers in an industrial site recognizing human-object interactions. We propose a system that utilizes a 2D object detector to locate and identify the objects in the scene and classic mixed reality features like plane detector, virtual object anchoring, and hand pose estimation to predict the interaction between a person and the objects placed on a working area in order to avoid the 3D object annotation and detection problem. We have also performed a user study with 25 volunteers who have been asked to complete a questionnaire after using the application to assess the usability and functionality of the developed application.) <|cite_end|>. In particular, egocentric vision can be adopted in the industrial context to understand workers’ behavior, improve workplace safety, and increase overall productivity. For example, by detecting the hands of the workers and determining which objects they are interacting with, it is possible to monitor object usage, provide information on the procedures to be carried out, and improve the safety of workers by issuing reminders when dangerous objects are manipulated. Previous works have investigated the problem of Human-Object Interaction (HOI) detection considering either third-person <|cite_start|> (Reference: Detecting and Recognizing Human-Object Interactions: To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting <human, verb, object> triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.)
<|cite_end|> <|cite_start|> (Reference: PPDM: Parallel Point Detection and Matching for Real-time Human-Object Interaction Detection: We propose a single-stage Human-Object Interaction (HOI) detection method that has outperformed all existing methods on HICO-DET dataset at 37 fps on a single Titan XP GPU. It is the first real-time HOI detection method. Conventional HOI detection methods are composed of two stages, i.e., human-object proposals generation, and proposals classification. Their effectiveness and efficiency are limited by the sequential and separate architecture. In this paper, we propose a Parallel Point Detection and Matching (PPDM) HOI detection framework. In PPDM, an HOI is defined as a point triplet <human point, interaction point, object point>. Human and object points are the center of the detection boxes, and the interaction point is the midpoint of the human and object points. PPDM contains two parallel branches, namely point detection branch and point matching branch. The point detection branch predicts three points. Simultaneously, the point matching branch predicts two displacements from the interaction point to its corresponding human and object points. The human point and the object point originated from the same interaction point are considered as matched pairs. In our novel parallel architecture, the interaction points implicitly provide context and regularization for human and object detection. Isolated detection boxes that are unlikely to form meaningful HOI triplets are suppressed, which increases the precision of HOI detection. Moreover, the matching between human and object detection boxes is only applied around limited numbers of filtered candidate interaction points, which saves much computational cost. Additionally, we build a new application-oriented database named HOI-A, which serves as a good supplement to the existing datasets. The source code and the dataset will be made publicly available to facilitate the development of HOI detection.) <|cite_end|> or first-person <|cite_start|> (Reference: HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction: We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze the research of category-level human-object interaction. HOI4D consists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by 4 participants interacting with 800 different object instances from 16 categories over 610 different indoor rooms. Frame-wise annotations for panoptic segmentation, motion segmentation, 3D hand pose, category-level object pose and hand action have also been provided, together with reconstructed object meshes and scene point clouds. With HOI4D, we establish three benchmarking tasks to promote category-level HOI from 4D visual signals including semantic segmentation of 4D dynamic point cloud sequences, category-level object pose tracking, and egocentric action segmentation with diverse interaction targets. In-depth analysis shows HOI4D poses great challenges to existing methods and produces great research opportunities.) <|cite_end|> <|cite_start|> (Reference: Fine-Grained Egocentric Hand-Object Segmentation: Dataset, Model, and Applications: Egocentric videos offer fine-grained information for high-fidelity modeling of human behaviors. Hands and interacting objects are one crucial aspect of understanding a viewer's behaviors and intentions.
We provide a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities. Our dataset is the first to label detailed hand-object contact boundaries. We introduce a context-aware compositional data augmentation technique to adapt to out-of-distribution YouTube egocentric video. We show that our robust hand-object segmentation model and dataset can serve as a foundational tool to boost or enable several downstream vision applications, including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos. Dataset and code are available at: https://github.com/owenzlz/EgoHOS) <|cite_end|> points of view. While these works have considered generic scenarios (e.g., COCO objects) or class-agnostic settings <|cite_start|> (Reference: Understanding Human Hands in Contact at Internet Scale: Hands are the central means by which humans manipulate their world and being able to reliably extract hand state information from Internet videos of humans engaged in interaction has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage as well as a 100K annotated hand-contact video frame dataset. The learned model on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.) <|cite_end|>, their use in industrial contexts is still understudied due to the limited availability of public datasets <|cite_start|> (Reference: The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain: Wearable cameras allow the collection of images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks including 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.)
<|cite_end|> <|cite_start|> (Reference: Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities: Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-view action dataset, with simultaneous static (8) and egocentric (4) recordings. Sequences are annotated with more than 100K coarse and 1M fine-grained action segments, and 18M 3D hand poses. We benchmark on three action understanding tasks: recognition, anticipation and temporal segmentation. Additionally, we propose a novel task of detecting mistakes. The unique recording format and rich set of annotations allow us to investigate generalization to new toys, cross-view transfer, long-tailed distributions, and pose vs. appearance. We envision that Assembly101 will serve as a new challenge to investigate various activity understanding problems.) <|cite_end|>. To develop a system capable of detecting Egocentric Human-Object Interactions (EHOI) in this context, it is generally required to collect and label large amounts of domain-specific data, which can be expensive in terms of cost and time, and is not always possible due to privacy constraints and industrial secrets <|cite_start|> (Reference: The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain: Wearable cameras allow the collection of images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks including 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.) <|cite_end|>. \begin{figure*}[t] \centering \includegraphics[scale=1]{imgs/fig_data_generation_pipeline.pdf} \caption{Synthetic EHOI images generation pipeline. (a) We use 3D scanners to acquire 3D models of the objects and environment. (b) We then use the proposed data generation tool to create the synthetic dataset.} \label{fig:data_generation_pipeline} \end{figure*} In this paper, we investigate whether the use of synthetic data in first-person vision can mitigate the need for labeled real domain-specific data in model training, which would greatly reduce the cost of gathering a suitable dataset for model development.
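The synthetic-then-real recipe investigated in the rest of the paper can be summarized by the following minimal sketch of ours; the model, loss, optimizer, batch size, and epoch counts are placeholder assumptions, not the paper's exact training schedule.

import torch
from torch.utils.data import DataLoader

def pretrain_then_finetune(model, synthetic_ds, real_ds, ehoi_loss,
                           pretrain_epochs=10, finetune_epochs=3, lr=1e-4):
    """Pre-train an EHOI detector on synthetic data, then fine-tune on the
    (much smaller) real set. All hyperparameters here are illustrative."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for dataset, epochs in [(synthetic_ds, pretrain_epochs), (real_ds, finetune_epochs)]:
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        for _ in range(epochs):
            for images, targets in loader:
                opt.zero_grad()
                loss = ehoi_loss(model(images), targets)  # placeholder detection loss
                loss.backward()
                opt.step()
    return model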
We propose a pipeline (see Fig.~\ref{fig:data_generation_pipeline}) and a tool that, leveraging 3D models of the target environment and objects, produces a large number of synthetic EHOI image examples, automatically labeled with several annotations, such as 2D and 3D hand and object bounding boxes, object categories, and hand information (i.e., hand side, contact state, and associated active objects), as well as multimodal signals such as depth maps and instance segmentation masks. Exploiting the proposed pipeline, we present \textit{EgoISM-HOI} (Egocentric Industrial Synthetic Multimodal dataset for Human-Object Interaction detection), a new photo-realistic dataset of EHOIs in an industrial scenario with rich annotations of hands, objects, and active objects (i.e., the objects the user is interacting with), including class labels, depth maps, and instance segmentation masks (see Fig.~\ref{fig:data_generation_pipeline} (b)). To assess the suitability of the synthetic data generated with the proposed protocol for tackling the EHOI detection task on target real data, we further acquired and labeled 42 real egocentric videos in an industrial laboratory in which different subjects perform test and repair operations on electrical boards\footnote{Note that both real and synthetic data were acquired in the same environment and with the same objects.}. We annotated all EHOI instances of the images, identifying the frames in which interactions occur and all active objects with a bounding box associated with the related object class. In addition, we labeled the hands and all the objects in the images. We investigated the potential of using the generated synthetic multimodal data, including depth maps and instance segmentation masks, to improve the performance of EHOI detection methods. Specifically, we designed an EHOI detection approach based on the method proposed in <|cite_start|> (Reference: Understanding Human Hands in Contact at Internet Scale: Hands are the central means by which humans manipulate their world and being able to reliably extract hand state information from Internet videos of humans engaged in interaction has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage as well as a 100K annotated hand-contact video frame dataset. The learned model on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.) <|cite_end|>, which makes use of the different multimodal signals available within our dataset. Experiments show that the proposed method outperforms baseline approaches based on the exploitation of class-agnostic models trained on out-of-domain real-world data. Indeed, the proposed method achieves good performance when trained with our synthetic data and a very small amount of real-world data. Additional experiments show that leveraging multimodal signals increases the accuracy and robustness of our EHOI detection system.
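To fix ideas, a per-image annotation record carrying the labels and signals enumerated above could look like the following; this is a hypothetical schema we assembled from the annotation types named in the text, and the released dataset's actual field names and encodings may differ.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HandAnnotation:
    box_2d: List[float]              # [x1, y1, x2, y2]
    side: str                        # "left" or "right"
    contact_state: str               # e.g. "in_contact" / "no_contact"
    active_object_id: Optional[int]  # index into `objects`, None if no interaction

@dataclass
class ObjectAnnotation:
    category: str                    # object class label
    box_2d: List[float]
    box_3d: List[float]              # hypothetical 3D box encoding

@dataclass
class EHOIRecord:
    rgb_path: str
    depth_map_path: str              # depth map rendered with the image
    segmentation_mask_path: str      # instance segmentation mask
    hands: List[HandAnnotation] = field(default_factory=list)
    objects: List[ObjectAnnotation] = field(default_factory=list)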
The contributions of this study are the following: 1) we propose a pipeline that exploits 3D models of real objects and environments to generate thousands of domain-specific synthetic egocentric human-object interaction images paired with several labels and modalities; 2) we present \textit{EgoISM-HOI}, a new multimodal dataset of synthetic EHOIs in an industrial scenario with rich annotations of hands and objects. To test the ability of models to generalize to real-world data, we acquired and manually labeled real-world images of EHOIs in the target environment; 3) we design a new method for EHOI detection that exploits additional modalities, such as depth maps and instance segmentation masks, to enhance the performance of classic HOI detection approaches; 4) we perform extensive evaluations to highlight the benefit of using synthetic data to pre-train EHOI detection methods, mainly when a limited set of real data is available, and report improvements of our approach over classical class-agnostic state-of-the-art methods; 5) we release the dataset and code publicly at the following link: \url{https://iplab.dmi.unict.it/egoism-hoi}. The remainder of this paper is organized as follows. Section~\ref{sec:related_work} provides a detailed summary of the related work. Section~\ref{sec:proposed_ehoi_generation_pipeline} details the proposed data generation pipeline. Section~\ref{sec:egoism_hoi} describes the proposed dataset. Section~\ref{sec:approach} introduces our multimodal EHOI detection method. Section~\ref{sec:experimental_results} reports and discusses the performed experiments and ablation studies. Finally, Section~\ref{sec:conclusion} concludes the paper. Related Work \label{sec:related_work} In this section, we discuss datasets and state-of-the-art methods for detecting human-object interactions from images and videos acquired from both third~(TPV) and first-person vision~(FPV). \subsection{Datasets for Human-Object Interaction Detection} Previous works have proposed benchmark datasets to study human-object interactions from a third-person perspective. The datasets, such as \textit{PASCAL VOC} <|cite_start|> (Reference: The PASCAL Visual Object Classes (VOC) Challenge: ) <|cite_end|>, \textit{V-COCO} <|cite_start|> (Reference: Visual Semantic Role Labeling: In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.) <|cite_end|>, \textit{HICO} <|cite_start|> (Reference: HICO: A Benchmark for Recognizing Human-Object Interactions in Images: We introduce a new benchmark "Humans Interacting with Common Objects" (HICO) for recognizing human-object interactions (HOI).
We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories.) <|cite_end|>, \textit{HICO-DET} <|cite_start|> (Reference: Learning to Detect Human-Object Interactions: We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. To solve the task, we propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN). At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. Experiments on HICO-DET demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches.) <|cite_end|>, \textit{AmbiguousHOI} <|cite_start|> (Reference: Detailed 2D-3D Joint Representation for Human-Object Interaction: Human-Object Interaction (HOI) detection lies at the core of action understanding. Besides 2D information such as human/object appearance and locations, 3D pose is also usually utilized in HOI learning since its view-independence. However, rough 3D body joints just carry sparse body information and are not sufficient to understand complex interactions. Thus, we need detailed 3D body shape to go further. Meanwhile, the interacted object in 3D is also not fully studied in HOI learning. In light of these, we propose a detailed 2D-3D joint representation learning method. First, we utilize the single-view human body capture method to obtain detailed 3D body, face and hand shapes. Next, we estimate the 3D object location and size with reference to the 2D human-object spatial configuration and object category priors. Finally, a joint learning framework and cross-modal consistency tasks are proposed to learn the joint HOI representation. To better evaluate the 2D ambiguity processing capacity of models, we propose a new benchmark named Ambiguous-HOI consisting of hard ambiguous images. Extensive experiments in large-scale HOI benchmark and Ambiguous-HOI show impressive effectiveness of our method. Code and data are available at https://github.com/DirtyHarryLYL/DJ-RN.) <|cite_end|>, \textit{HOI-A} <|cite_start|> (Reference: PPDM: Parallel Point Detection and Matching for Real-time Human-Object Interaction Detection: We propose a single-stage Human-Object Interaction (HOI) detection method that has outperformed all existing methods on HICO-DET dataset at 37 fps on a single Titan XP GPU. It is the first real-time HOI detection method. Conventional HOI detection methods are composed of two stages, i.e., human-object proposals generation, and proposals classification. Their effectiveness and efficiency are limited by the sequential and separate architecture. 
In this paper, we propose a Parallel Point Detection and Matching (PPDM) HOI detection framework. In PPDM, an HOI is defined as a point triplet < human point, interaction point, object point>. Human and object points are the center of the detection boxes, and the interaction point is the midpoint of the human and object points. PPDM contains two parallel branches, namely point detection branch and point matching branch. The point detection branch predicts three points. Simultaneously, the point matching branch predicts two displacements from the interaction point to its corresponding human and object points. The human point and the object point originated from the same interaction point are considered as matched pairs. In our novel parallel architecture, the interaction points implicitly provide context and regularization for human and object detection. The isolated detection boxes are unlikely to form meaning HOI triplets are suppressed, which increases the precision of HOI detection. Moreover, the matching between human and object detection boxes is only applied around limited numbers of filtered candidate interaction points, which saves much computational cost. Additionally, we build a new application-oriented database named HOI-A, which severs as a good supplement to the existing datasets. The source code and the dataset will be made publicly available to facilitate the development of HOI detection.) <|cite_end|>, and \textit{BEHAVE} <|cite_start|> (Reference: BEHAVE: Dataset and Method for Tracking Human Object Interactions: Modelling interactions between humans and objects in natural environments is central to many applications including gaming, virtual and mixed reality, as well as human behavior analysis and human-robot collaboration. This challenging operation scenario requires generalization to vast number of objects, scenes, and human actions. Unfortunately, there exist no such dataset. Moreover, this data needs to be acquired in diverse natural environments, which rules out 4D scanners and marker based capture systems. We present BEHAVE dataset, the first full body human- object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits along with the annotated contacts between them. We record around 15k frames at 5 locations with 8 subjects performing a wide range of interactions with 20 common objects. We use this data to learn a model that can jointly track humans and objects in natural environments with an easy-to-use portable multi-camera setup. Our key insight is to predict correspondences from the human and the object to a statistical body model to obtain human-object contacts during interactions. Our approach can record and track not just the humans and objects but also their interactions, modeled as surface contacts, in 3D. Our code and data can be found at: http://virtualhumans.mpi-inf.mpg.de/behave) <|cite_end|>, offer diverse annotations and cover a wide range of scenarios. Most related to our study is \textit{100 Days of Hands} <|cite_start|> (Reference: Understanding Human Hands in Contact at Internet Scale: Hands are the central means by which humans manipulate their world and being able to reliably extract hand state information from Internet videos of humans engaged in their hands has the potential to pave the way to systems that can learn from petabytes of video data. 
This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction method that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage as well as a 100K annotated hand-contact video frame dataset. The learned model on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.) <|cite_end|>, which is a large-scale dataset of human-object interactions containing more than 131 days of video footage acquired from both third and first-person points of view. The authors extracted 100K frames and annotated 189.6K hands and 110.1K interacting objects with bounding boxes. Moreover, for each hand, they annotated the contact state considering five different classes (i.e., \textit{none, self, other-person, non-portable object}, and \textit{portable object}). In contrast to previous works, our study focuses on understanding human-object interactions from a first-person point of view by exploiting synthetically generated data. Owing to the aforementioned vantage point given by wearable cameras, previous works have proposed datasets to study human-object interactions from first-person vision. \textit{EgoHands} <|cite_start|> (Reference: Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions: Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.) <|cite_end|> is a dataset of egocentric video pairs of people interacting with their hands in different daily-life contexts, where they are involved in four social situations (i.e., playing cards, playing chess, solving puzzles, and playing Jenga). It contains 130,000 frames and 4,800 pixel-level hand segmentation masks. \textit{EPIC-KITCHENS-100} <|cite_start|> (Reference: Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100: This paper introduces the pipeline to extend the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras.
Compared to its previous version (Damen in Scaling egocentric vision: ECCV, 2018), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection enables new challenges such as action detection and evaluating the “test of time”—i.e. whether models trained on data collected in 2018 can generalise to new footage collected two years later. The dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics.) <|cite_end|> contains over 100 hours, 20 million frames, and 90,000 actions in 700 variable-length videos of unscripted activities in 45 kitchen environments. The authors provide spatial annotations of (1) instance segmentation masks using Mask R-CNN <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|> and (2) hand and active object bounding boxes labeled with the system introduced in <|cite_start|> (Reference: Understanding Human Hands in Contact at Internet Scale: Hands are the central means by which humans manipulate their world and being able to reliably extract hand state information from Internet videos of humans engaged in their hands has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction method that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage as well as a 100K annotated hand-contact video frame dataset. The learned model on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.) <|cite_end|>. <|cite_start|> (Reference: EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations: We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video.
VISOR annotates videos from EPIC-KITCHENS, which comes with a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked - where we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/VISOR) <|cite_end|> proposed \textit{VISOR}, an extension of \textit{EPIC-KITCHENS-100}, which comprises pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric videos. It contains 272,000 manually segmented semantic masks of 257 object classes, 9.9 million interpolated dense masks, and 67,000 hand-object relations. \textit{EGTEA Gaze+} <|cite_start|> (Reference: In the Eye of the Beholder: Gaze and Actions in First Person Video: We address the task of jointly determining what a person is doing and where they are looking based on the analysis of video captured by a headworn camera. To facilitate our research, we first introduce the EGTEA Gaze+ dataset. Our dataset comes with videos, gaze tracking data, hand masks and action annotations, thereby providing the most comprehensive benchmark for First Person Vision (FPV). Moving beyond the dataset, we propose a novel deep model for joint gaze estimation and action recognition in FPV. Our method describes the participant's gaze as a probabilistic variable and models its distribution using stochastic units in a deep network. We further sample from these stochastic units, generating an attention map to guide the aggregation of visual features for action recognition. Our method is evaluated on our EGTEA Gaze+ dataset and achieves a performance level that exceeds the state-of-the-art by a significant margin. More importantly, we demonstrate that our model can be applied to larger scale FPV dataset---EPIC-Kitchens even without using gaze, offering new state-of-the-art results on FPV action recognition.) <|cite_end|> contains more than 28 hours of egocentric video acquired by subjects performing different meal preparation tasks. The authors provide several annotations, including binocular gaze tracking data, frame-level action annotations, and 15K hand segmentation masks. Recognizing EHOIs could be particularly useful in industrial scenarios, for example, to optimize production processes or to increase workplace safety. \textit{MECCANO} <|cite_start|> (Reference: The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain: Wearable cameras allow to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings.
MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks including 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicy release the dataset at https://iplab.dmi.unict.it/MECCANO.) <|cite_end|> <|cite_start|> (Reference: MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial-like Domain: Wearable cameras allow to acquire images and videos from the user's perspective. These data can be processed to understand humans behavior. Despite human behavior analysis has been thoroughly investigated in third person vision, it is still understudied in egocentric settings and in particular in industrial scenarios. To encourage research in this field, we present MECCANO, a multimodal dataset of egocentric videos to study humans behavior understanding in industrial-like settings. The multimodality is characterized by the presence of gaze signals, depth maps and RGB videos acquired simultaneously with a custom headset. The dataset has been explicitly labeled for fundamental tasks in the context of human behavior understanding from a first person view, such as recognizing and anticipating human-object interactions. With the MECCANO dataset, we explored five different tasks including 1) Action Recognition, 2) Active Objects Detection and Recognition, 3) Egocentric Human-Objects Interaction Detection, 4) Action Anticipation and 5) Next-Active Objects Detection. We propose a benchmark aimed to study human behavior in the considered industrial-like scenario which demonstrates that the investigated tasks and the considered scenario are challenging for state-of-the-art algorithms. To support research in this field, we publicy release the dataset at https://iplab.dmi.unict.it/MECCANO/.) <|cite_end|> is a multimodal dataset of FPV videos for human behavior understanding collected in an industrial-like scenario. It includes gaze signals, depth maps, and several annotations. MECCANO has been explicitly annotated to study EHOIs with bounding boxes around the hands and active objects, and verbs that describe the interactions. \textit{Assembly101} <|cite_start|> (Reference: Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities: Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-view action dataset, with simultaneous static (8) and egocentric (4) recordings. Sequences are annotated with more than 100K coarse and 1M fine-grained action segments, and 18M 3D hand poses.
We benchmark on three action understanding tasks: recognition, anticipation and temporal segmentation. Additionally, we propose a novel task of detecting mistakes. The unique recording format and rich set of annotations allow us to investigate generalization to new toys, cross-view transfer, long-tailed distributions, and pose vs. appearance. We envision that Assembly101 will serve as a new challenge to investigate various activity understanding problems.) <|cite_end|> is a multi-view action dataset of people assembling and disassembling 101 toy vehicles. It contains 4321 video sequences acquired simultaneously from 8 TPV and 4 FPV cameras, 1M fine-grained action segments, and 18 million 3D hand poses. \textit{Ego4D} <|cite_start|> (Reference: Ego4D: Around the World in 3,000 Hours of Egocentric Video: We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,670 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, we present a host of new benchmark challenges centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, we aim to push the frontier of first-person perception. Project page: https://ego4d-data.org/) <|cite_end|> is a multimodal video dataset to study egocentric perception. The dataset contains more than 3,500 video hours of daily life activity captured by 931 subjects and additional modalities such as eye gaze data, audio, and 3D meshes of environments. Ego4D has been annotated with bounding boxes around the hands and objects involved in the interactions. \textit{HOI4D} <|cite_start|> (Reference: HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction: We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze the research of category-level human-object interaction. HOI4D consists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by 4 participants interacting with 800 different object instances from 16 categories over 610 different indoor rooms. Frame-wise annotations for panoptic segmentation, motion segmentation, 3D hand pose, category-level object pose and hand action have also been provided, together with reconstructed object meshes and scene point clouds. With HOI4D, we establish three benchmarking tasks to promote category-level HOI from 4D visual signals including semantic segmentation of 4D dynamic point cloud sequences, category-level object pose tracking, and egocentric action segmentation with diverse interaction targets. In-depth analysis shows HOI4D poses great challenges to existing methods and produces great research opportunities.)
<|cite_end|> is a large-scale 4D egocentric dataset for human-object interaction detection. \textit{HOI4D} contains more than 2 million RGB-D egocentric video frames in different indoor environments of people interacting with 800 object instances. Unlike these works, we aim to study the usefulness of synthetic data for training models that need to be deployed in a specific environment. To this end, we provide \textit{EgoISM-HOI}, a photo-realistic multimodal dataset of synthetic images for understanding human-object interactions acquired in an industrial scenario, paired with labeled real-world images of egocentric human-object interactions in the same target environment. Our dataset contains RGB-D images and rich, automatically labeled annotations of hands, objects, and active objects, including bounding boxes, object categories, instance segmentation masks, and interaction information (i.e., hand contact state, hand side, and hand-active object relationships). \subsection{Human-Object Interaction simulators and synthetic datasets} This line of research has focused on providing 3D simulators that can generate automatically labeled synthetic data <|cite_start|> (Reference: AI2-THOR: An Interactive 3D Environment for Visual AI: We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at http://ai2thor.allenai.org. AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain.) <|cite_end|> <|cite_start|> (Reference: Habitat: A Platform for Embodied AI Research: We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} x {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets.
We hope that our open-source platform and these findings will advance research in embodied AI.) <|cite_end|> <|cite_start|> (Reference: Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments: We present Interactive Gibson Benchmark, the first comprehensive benchmark for training and evaluating Interactive Navigation solutions. Interactive Navigation tasks are robot navigation problems where physical interaction with objects (e.g., pushing) is allowed and even encouraged to reach the goal. Our benchmark comprises two novel elements: 1) a new experimental simulated environment, the Interactive Gibson Environment, that generate photo-realistic images of indoor scenes and simulates realistic physical interactions of robots and common objects found in these scenes; 2) the Interactive Navigation Score, a novel metric to study the interplay between navigation and physical interaction of Interactive Navigation solutions. We present and evaluate multiple learning-based baselines in Interactive Gibson Benchmark, and provide insights into regimes of navigation with different trade-offs between navigation, path efficiency and disturbance of surrounding objects. We make our benchmark publicly available ([Online]. Available: https://sites.google.com/view/interactivegibsonenv) and encourage researchers from related robotics disciplines (e.g., planning, learning, control) to propose, evaluate, and compare their Interactive Navigation solutions in Interactive Gibson Benchmark.) <|cite_end|> <|cite_start|> (Reference: ElderSim: A Synthetic Data Generation Platform for Human Action Recognition in Eldercare Applications: To train deep learning models for vision-based action recognition of elders' daily activities, we need large-scale activity datasets acquired under various daily living environments and conditions. However, most public datasets used in human action recognition either differ from or have limited coverage of elders' activities in many aspects, making it challenging to recognize elders' daily activities well by only utilizing existing datasets. Recently, such limitations of available datasets have actively been compensated by generating synthetic data from realistic simulation environments and using those data to train deep learning models. In this paper, based on these ideas we develop ElderSim, an action simulation platform that can generate synthetic data on elders' daily activities. For 55 kinds of frequent daily activities of the elders, ElderSim generates realistic motions of synthetic characters with various adjustable data-generating options, and provides different output modalities including RGB videos, two- and three-dimensional skeleton trajectories. We then generate KIST SynADL, a large-scale synthetic dataset of elders' activities of daily living, from ElderSim and use the data in addition to real datasets to train three state-of the-art human action recognition models. From the experiments following several newly proposed scenarios that assume different real and synthetic dataset configurations for training, we observe a noticeable performance improvement by augmenting our synthetic data. We also offer guidance with insights for the effective utilization of synthetic data to help recognize elders' daily activities.)
<|cite_end|> <|cite_start|> (Reference: Put Your PPE On: A Tool for Synthetic Data Generation and Related Benchmark in Construction Site Scenarios: Using Machine Learning algorithms to enforce safety in construction sites has attracted a lot of interest in recent years. Being able to understand if a worker is wearing personal protective equipment, if he has fallen in the ground, or if he is too close to a moving vehicles or a dangerous tool, could be useful to prevent accidents and to take immediate rescue actions. While these problems can be tackled with machine learning algorithms, a large amount of labeled data, difficult and expensive to obtain are required. Motivated by these observations, we propose a pipeline to produce synthetic data in a construction site to mitigate real data scarcity. We present a benchmark to test the usefulness of the generated data, focusing on three different tasks: safety compliance through object detection, fall detection through pose estimation and distance regression from monocular view. Experiments show that the use of synthetic data helps to reduce the amount of needed real data and allow to achieve good performances.) <|cite_end|>. While these tools allow simulating an agent that navigates in an indoor environment, there are fewer choices for simulating object interaction. <|cite_start|> (Reference: Real-time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor: We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints, common for virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to localize the hand and regress 3D joint locations. Hand localization is achieved by using a CNN to estimate the 2D position of the hand center in the input, even in the presence of clutter and occlusions. The localized hand position, together with the corresponding input depth value, is used to generate a normalized cropped image that is fed into a second CNN to regress relative 3D hand joint locations in real time. For added accuracy, robustness and temporal stability, we refine the pose estimates using a kinematic pose tracking energy. To train the CNNs, we introduce a new photorealistic dataset that uses a merged reality approach to capture and synthesize large amounts of annotated data of natural hand interaction in cluttered scenes. Through quantitative and qualitative evaluation, we show that our method is robust to self-occlusion and occlusions by objects, particularly in moving egocentric perspectives.) <|cite_end|> proposed a data generation framework that tracks and combines real human hands with virtual objects to generate photorealistic images of hand-object interactions. Using the proposed tool, the authors introduced \textit{SynthHands}, a dataset that contains around 200K RGB-D images of hand-object interactions acquired from 5 FPV virtual cameras. \textit{ManipulaTHOR} <|cite_start|> (Reference: ManipulaTHOR: A Framework for Visual Object Manipulation: The domain of Embodied AI has recently witnessed substantial progress, particularly in navigating agents within their environments. These early successes have laid the building blocks for the community to tackle tasks that require agents to actively interact with objects in their environment.
Object manipulation is an established research domain within the robotics community and poses several challenges including manipulator motion, grasping and long-horizon planning, particularly when dealing with oft-overlooked practical setups involving visually rich and complex scenes, manipulation using mobile agents (as opposed to tabletop manipulation), and generalization to unseen environments and objects. We propose a framework for object manipulation built upon the physics-enabled, visually rich AI2-THOR framework and present a new challenge to the Embodied AI community known as ArmPointNav. This task extends the popular point navigation task to object manipulation and offers new challenges including 3D obstacle avoidance, manipulating objects in the presence of occlusion, and multi-object manipulation that necessitates long term planning. Popular learning paradigms that are successful on PointNav challenges show promise, but leave a large room for improvement.) <|cite_end|> is an extension of the \textit{AI2-THOR} framework <|cite_start|> (Reference: AI2-THOR: An Interactive 3D Environment for Visual AI: We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at http://ai2thor.allenai.org. AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain.) <|cite_end|> that adds a robotic arm to virtual agents, enabling interaction with objects. Thanks to this framework, the authors introduced the \textit{ArmPointNav} dataset, which contains interactions in 30 kitchen scenes, 150 object categories, and 12 graspable object categories. <|cite_start|> (Reference: Learning joint reconstruction of hands and manipulated objects: Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transferability of ObMan-trained models to real data.) <|cite_end|> introduced the \textit{ObMan} dataset, a large-scale synthetic image dataset of hand-object interactions.
The peculiarity of this work is that the authors used the \textit{GraspIt} software <|cite_start|> (Reference: Graspit! A versatile simulator for robotic grasping: A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided.) <|cite_end|> to improve the photo-realism of the generated interactions. The generated dataset contains more than 20,000 hand-object interactions in which the background is randomized by choosing images from the \textit{LSUN} <|cite_start|> (Reference: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop: While there has been remarkable progress in the performance of visual recognition algorithms, the state-of-the-art models tend to be exceptionally data-hungry. Large labeled training datasets, expensive and tedious to produce, are required to optimize millions of parameters in deep network models. Lagging behind the growth in model capacity, the available datasets are quickly becoming outdated in terms of size and density. To circumvent this bottleneck, we propose to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop. Starting from a large set of candidate images for each category, we iteratively sample a subset, ask people to label them, classify the others with a trained model, split the set into positives, negatives, and unlabeled based on the classification confidence, and then iterate with the unlabeled set. To assess the effectiveness of this cascading procedure and enable further progress in visual recognition research, we construct a new image dataset, LSUN. It contains around one million labeled images for each of 10 scene categories and 20 object categories. We experiment with training popular convolutional networks and find that they achieve substantial performance gains when trained on this dataset.) <|cite_end|> and \textit{ImageNet} <|cite_start|> (Reference: ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.) <|cite_end|> datasets.
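To illustrate the kind of background randomization described above, the following is a minimal sketch in Python: a rendered foreground with an alpha mask is composited over a randomly sampled background. This is not the ObMan code; the array shapes, names, and blending rule are illustrative assumptions, and random arrays stand in for real images.
\begin{verbatim}
# Minimal sketch of background randomization: composite a rendered
# foreground (with an alpha mask) over a randomly chosen background.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a rendered hand-object image and its alpha mask.
foreground = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
alpha = (rng.random((256, 256)) > 0.7).astype(np.float32)  # 1 = foreground

# Stand-in for a background sampled from a large image collection.
background = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Alpha blending: keep foreground pixels, fill the rest with background.
composite = (alpha[..., None] * foreground
             + (1.0 - alpha[..., None]) * background).astype(np.uint8)
print(composite.shape)  # (256, 256, 3)
\end{verbatim}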
<|cite_start|> (Reference: DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General Objects Based on Simulation: Robotic dexterous grasping is the first step to enable human-like dexterous object manipulation and thus a crucial robotic technology. However, dexterous grasping is much more under-explored than object grasping with parallel grippers, partially due to the lack of a large-scale dataset. In this work, we present a large-scale robotic dexterous grasp dataset, DexGraspNet, generated by our proposed highly efficient synthesis method that can be generally applied to any dexterous hand. Our method leverages a deeply accelerated differentiable force closure estimator and thus can efficiently and robustly synthesize stable and diverse grasps on a large scale. We choose ShadowHand and generate 1.32 million grasps for 5355 objects, covering more than 133 object categories and containing more than 200 diverse grasps for each object instance, with all grasps having been validated by the Isaac Gym simulator. Compared to the previous dataset from Liu et al. generated by GraspIt!, our dataset has not only more objects and grasps, but also higher diversity and quality. Via performing cross-dataset experiments, we show that training several algorithms of dexterous grasp synthesis on our dataset significantly outperforms training on the previous one. To access our data and code, including code for human and Allegro grasp synthesis, please visit our project page: https://pku-epic.github.io/DexGraspNet/.) <|cite_end|> introduced \textit{DexGraspNet}, a large-scale synthetic dataset for robotic dexterous grasping containing 1.32M grasps of 5355 objects belonging to 133 object categories. <|cite_start|> (Reference: Affordance Diffusion: Synthesizing Hand-Object Interactions: Recent successes in image synthesis are powered by large-scale diffusion models. However, most methods are currently limited to either text- or image-conditioned generation for synthesizing an entire image, texture transfer or inserting objects into a user-specified region. In contrast, in this work we focus on synthesizing complex interactions (ie, an articulated hand) with a given object. Given an RGB image of an object, we aim to hallucinate plausible images of a human hand interacting with it. We propose a two-step generative approach: a LayoutNet that samples an articulation-agnostic hand-object-interaction layout, and a ContentNet that synthesizes images of a hand grasping the object given the predicted layout. Both are built on top of a large-scale pretrained diffusion model to make use of its latent representation. Compared to baselines, the proposed method is shown to generalize better to novel objects and perform surprisingly well on out-of-distribution in-the-wild scenes of portable-sized objects. The resulting system allows us to predict descriptive affordance information, such as hand articulation and approaching orientation. Project page: https://judyye.github.io/affordiffusion-www) <|cite_end|> proposed an approach for synthesizing virtual human hands interacting with real-world objects from RGB images. In contrast to these works, our generation pipeline has been specifically designed to obtain accurate 3D reconstructions of a target environment and the objects it contains. 3D models of the target environment and objects are used by our tool to generate realistic egocentric hand-object interactions that integrate coherently with the surrounding environment.
Moreover, our tool allows the customization of several parameters of the virtual scene, for example, by randomizing the light positions, the placement of the virtual objects in the environment, or the virtual agent's clothing. In addition, the proposed tool automatically outputs several annotations and data signals, such as 2D-3D bounding boxes, hand labels (i.e., hand contact state and hand side), instance segmentation masks, and depth maps. Another difference with respect to the aforementioned works is that our tool is designed to automatically generate interactions from a first-person point of view without using any additional real-world data or specific hardware devices other than 3D models. \subsection{Methods for Detecting Human-Object Interactions} Over the past years, the human-object interaction detection task has been studied from the third-person point of view <|cite_start|> (Reference: Visual Semantic Role Labeling: In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.) <|cite_end|> <|cite_start|> (Reference: HICO: A Benchmark for Recognizing Human-Object Interactions in Images: We introduce a new benchmark "Humans Interacting with Common Objects" (HICO) for recognizing human-object interactions (HOI). We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories.) <|cite_end|> <|cite_start|> (Reference: Learning to Detect Human-Object Interactions: We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. To solve the task, we propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN). At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes.
Experiments on HICO-DET demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches.) <|cite_end|>. <|cite_start|> (Reference: Detecting and Recognizing Human-Object Interactions: To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting <human, verb, object> triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.) <|cite_end|> proposed a method for detecting human-object interactions in the form of \textit{$<$human, verb, object$>$} triplets, where bounding boxes around objects and humans are also predicted. Specifically, they extended the state-of-the-art object detector Faster R-CNN <|cite_start|> (Reference: Faster R-CNN: To address the problems of missed and duplicate detections in object detection algorithms, this paper proposes an improved Faster R-CNN algorithm based on dual-threshold non-maximum suppression. The algorithm first uses a deep convolutional network architecture to extract multi-layer convolutional features of the targets, then applies the proposed dual-threshold non-maximum suppression (DT-NMS) algorithm to extract deep information of candidate target regions at the RPN stage, and finally uses bilinear interpolation to improve the nearest-neighbor interpolation in the original RoI pooling layer, making the localization of targets on detection datasets more accurate. Experimental results show that the DT-NMS algorithm effectively balances the trade-off between missed detections and false detections of single-threshold algorithms, while specifically reducing the probability that the same target is detected multiple times. Compared with the soft-NMS algorithm, the proposed algorithm reduces the duplicate detection rate on PASCAL VOC2007 by 2.4% and the misclassification rate of multiply detected targets by 2%. Compared with the Faster R-CNN algorithm, the proposed algorithm achieves a detection accuracy of 74.7% on PASCAL VOC2007, an improvement of 1.5%, and an improvement of 1.4% on the MSCOCO dataset. The algorithm also has a fast detection speed, reaching 16 FPS.) <|cite_end|> with an additional human-centric branch that uses the features extracted by the backbone to predict a score for candidate human-object pairs and an action class.
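As a rough sketch of this human-centric branch idea (an illustration of the general mechanism described in the InteractNet paper, not the authors' released implementation; the feature dimension, number of actions, and layer names are assumptions), one could write:
\begin{verbatim}
# Minimal sketch of an InteractNet-style human-centric branch: given
# pooled backbone features of a detected person, predict per-action
# scores and, per action, the mean of a density over the target
# object location (4 box-offset parameters). Dimensions are assumed.
import torch
import torch.nn as nn

class HumanCentricBranch(nn.Module):
    def __init__(self, feat_dim=1024, num_actions=26):
        super().__init__()
        self.action_head = nn.Linear(feat_dim, num_actions)
        # Per action, 4 parameters locating the target object box
        # relative to the person box.
        self.target_head = nn.Linear(feat_dim, num_actions * 4)

    def forward(self, person_feats):
        # person_feats: (N, feat_dim) pooled features of person boxes
        action_scores = self.action_head(person_feats)   # (N, A)
        target_mu = self.target_head(person_feats)       # (N, A*4)
        return action_scores, target_mu.view(
            -1, action_scores.shape[1], 4)               # (N, A, 4)

branch = HumanCentricBranch()
scores, mu = branch(torch.randn(2, 1024))
print(scores.shape, mu.shape)  # torch.Size([2, 26]) torch.Size([2, 26, 4])
\end{verbatim}
The action scores of this branch are then fused with the object detection scores to rank candidate human-object pairs, which matches the triplet-scoring scheme described in the text.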
[ "<|reference_start|> Understanding Human Hands in Contact at Internet Scale: Hands are the central means by which humans manipulate their world and being able to reliably extract hand state information from Internet videos of humans engaged in their hands has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction method that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage as well as a 100K annotated hand-contact video frame dataset. The learned model on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands. <|reference_end|>", "<|reference_start|> HICO: A Benchmark for Recognizing Human-Object Interactions in Images: We introduce a new benchmark \"Humans Interacting with Common Objects\" (HICO) for recognizing human-object interactions (HOI). We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories. <|reference_end|>", "<|reference_start|> Learning to Detect Human-Object Interactions: We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. To solve the task, we propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN). At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. Experiments on HICO-DET demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches. <|reference_end|>", "<|reference_start|> ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. 
We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. <|reference_end|>" ]
[ 10, 17, 18, 45 ]
{"<|multi_cite_1_1|>": "ss-779186", "<|multi_cite_1_2|>": "arxiv-154179", "<|multi_cite_1_3|>": "arxiv-373949", "<|cite_2|>": "ss-1658979", "<|multi_cite_3_1|>": "arxiv-408836", "<|multi_cite_3_2|>": "ss-726498", "<|multi_cite_4_1|>": "arxiv-122453", "<|multi_cite_4_2|>": "arxiv-241379", "<|multi_cite_5_1|>": "arxiv-403015", "<|multi_cite_5_2|>": "arxiv-438804", "<|cite_6|>": "arxiv-271193", "<|multi_cite_7_1|>": "arxiv-295591", "<|multi_cite_7_2|>": "arxiv-408836", "<|cite_8|>": "arxiv-295591", "<|cite_35|>": "arxiv-271193", "<|cite_9|>": "ss-757917", "<|cite_10|>": "arxiv-77882", "<|cite_11|>": "ss-1004540", "<|cite_12|>": "arxiv-116807", "<|cite_13|>": "arxiv-259903", "<|cite_14|>": "arxiv-241379", "<|cite_15|>": "arxiv-413141", "<|cite_16|>": "arxiv-271193", "<|cite_17|>": "ss-1448930", "<|cite_18|>": "ss-1207224", "<|cite_19|>": "arxiv-119553", "<|cite_36|>": "arxiv-271193", "<|cite_37|>": "arxiv-449027", "<|cite_20|>": "arxiv-268683", "<|multi_cite_21_1|>": "arxiv-295591", "<|multi_cite_21_2|>": "arxiv-447093", "<|cite_22|>": "arxiv-408836", "<|cite_23|>": "arxiv-373949", "<|cite_24|>": "arxiv-403015", "<|multi_cite_25_1|>": "arxiv-143160", "<|multi_cite_25_2|>": "arxiv-197792", "<|multi_cite_25_3|>": "ss-928508", "<|multi_cite_25_4|>": "arxiv-299771", "<|multi_cite_25_5|>": "ss-2267697", "<|cite_38|>": "arxiv-121107", "<|cite_26|>": "arxiv-336442", "<|cite_27|>": "arxiv-143160", "<|cite_39|>": "arxiv-199519", "<|cite_28|>": "ss-1452779", "<|cite_29|>": "arxiv-79171", "<|cite_30|>": "arxiv-65515", "<|cite_40|>": "arxiv-451479", "<|cite_41|>": "arxiv-491001", "<|multi_cite_31_1|>": "arxiv-77882", "<|multi_cite_31_2|>": "ss-1004540", "<|multi_cite_31_3|>": "arxiv-116807", "<|cite_42|>": "arxiv-122453", "<|cite_32|>": "ss-949521", "<|cite_43|>": "arxiv-241379", "<|cite_44|>": "arxiv-385087", "<|cite_45|>": "arxiv-436978", "<|multi_cite_33_1|>": "arxiv-116807", "<|multi_cite_33_2|>": "arxiv-170829", "<|cite_46|>": "arxiv-474108", "<|cite_47|>": "arxiv-271193", "<|cite_48|>": "arxiv-438804", "<|cite_49|>": "arxiv-376010", "<|cite_50|>": "ss-1474145", "<|cite_34|>": "arxiv-260981", "<|cite_51|>": "arxiv-370414", "<|cite_52|>": "arxiv-271193", "<|multi_cite_53_1|>": "arxiv-370414", "<|multi_cite_53_2|>": "arxiv-438804"}
2311.05624
<|paper_start|> Title: NP-hard problems are not in BQP Abstract: NP-hard problems are not in BQP: Grover's algorithm can solve NP-complete problems on quantum computers faster than all the known algorithms on classical computers. However, Grover's algorithm still needs exponential time. Due to the BBBV theorem, Grover's algorithm is optimal for searches over the domain of a function when the function is used as a black box. We analyze the NP-complete set \[\{ (\langle M \rangle, 1^n, 1^t ) \mid \text{ TM }M\text{ accepts an }x\in\{0,1\}^n\text{ within }t\text{ steps}\}.\] If $t$ is large enough, then $M$ accepts each word of length $n$ in $L(M)$ within $t$ steps. So, one can use methods from computability theory to show that black-box search is the fastest way to find a solution. Therefore, Grover's algorithm is optimal for NP-complete problems. Introduction \nocite{homeister2008quantum} \nocite{arora2009computational} One can efficiently simulate a classical computer with a quantum computer, so that $\cP\subseteq\BQP$. However, no efficient quantum algorithm is known for \NP-hard problems. For any computable function $f :\{0,1\}^n \to \{0,1\}$ with exactly one element $x$ such that $f(x)=1$, Grover's algorithm <|cite_start|> (Reference: A fast quantum mechanical algorithm for database search: Quantum mechanical computers were proposed in the early 1980's [Benioff80] and shown to be at least as powerful as classical computers, an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80's and early 90's [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o(log N).) <|cite_end|> can find this element $x$ in $\Theta(2^{n/2})$ accesses to $f$ <|cite_start|> (Reference: Quantum and classical tradeoffs: ) <|cite_end|>. There are several variants of Grover's algorithm for the case that the number of values $x$ with $f(x)=1$ is not exactly one <|cite_start|> (Reference: Quantum search algorithms: We review some quantum algorithms for search problems: Grover's search algorithm, its generalization to amplitude amplification, the applications of amplitude amplification to various problems and the recent quantum algorithms based on quantum walks.) <|cite_end|>. Due to the BBBV theorem <|cite_start|> (Reference: Strengths and Weaknesses of Quantum Computing: Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers.
Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of $\NP$ can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class $\NP$ cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$. We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class $\NP \cap \coNP$ cannot be solved on a QTM in time $o(2^{n/3})$. The former bound is tight since recent work of Grover [in {\it Proc.\ $28$th Annual ACM Symposium Theory Comput.}, 1996] shows how to accept the class $\NP$ relative to any oracle on a quantum computer in time $O(2^{n/2})$.) <|cite_end|>, Grover's algorithm is optimal for searching with a black box: a quantum computer needs at least $\Omega(2^{n/2})$ accesses to the black box. In this paper, we construct an $\NP$-complete problem that cannot be solved faster than by black-box search. We take an arbitrary TM $M$ and decide whether there is an input word of length $n$ that is accepted within $t$ steps. If $t$ is large enough, then every input word of length $n$ in $L(M)$ is accepted within $t$ steps. The required number of steps $t$ grows faster than any computable function. So, we can use methods from computability theory to prove that we cannot do better than black-box search. This implies that no $\NP$-hard problem is in $\BQP$. We take a look at the \NP-complete set \[\{ (\langle M \rangle, 1^n, 1^t ) \mid \text{ TM }M\text{ accepts an }x\in\{0,1\}^n\text{ within }t\text{ steps}\}.\] The TM $M$ will be fixed, so for an arbitrary but fixed $M$ we denote \[ \boundset{M}=\{\, (1^n, 1^t) \mid \text{ TM }M\text{ accepts an }x\in\B^n \text{ within }t\text{ steps} \,\}\text{.} \] If a TM accepts an input, then it accepts it within a finite number of steps. So, \[ L(M) = \lim_{t\to\infty} \{x \mid M \text{ accepts }x \text{ within }t\text{ steps} \}.\] In section \ref{se:inf}, we analyze the set \[ \infset{M} = \{ 1^n \mid \exists x \in \B^n \text{ such that }x\in L(M) \}.\] Obviously, for all $n\in\nat$: \[ 1^n \in\infset{M} \iff \lim_{t\to\infty} (1^n, 1^t)\in\boundset{M}. \] The set $\infset{M}$ is not computable, but it is computable relative to the oracle $L(M)$. In that case, we would have to apply the oracle $L(M)$ to each $x\in\B^n$, so we need a black-box search. In section \ref{se:main}, we will conclude from the black-box complexity of $\infset{M}$ the complexity of the computable set $\boundset{M}$. So, the set $\boundset{M}$ is in \NP{} and cannot be decided faster than by black-box search. <|paper_end|>
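To make the black-box reading concrete, the following minimal Python sketch decides membership in $\boundset{M}$ by exhaustive search; the \texttt{accepts\_within} predicate is a hypothetical stand-in for a step-bounded simulation of $M$. This is exactly the search that Grover's algorithm speeds up quadratically but, by the argument above, cannot be avoided altogether:
\begin{verbatim}
from itertools import product

def bounded_M(accepts_within, n, t):
    """Decide (1^n, 1^t) in BOUNDED(M) by pure black-box search:
    query the step-bounded acceptance predicate on each x in {0,1}^n.
    Classically this takes up to 2^n queries; Grover's algorithm lowers
    the query count to Theta(2^(n/2)), but by the BBBV bound no further."""
    return any(accepts_within(x, t) for x in product((0, 1), repeat=n))

# Toy stand-in for "TM M accepts x within t steps": accept iff x encodes
# a positive multiple of 3 and the step budget t covers reading x.
def toy_accepts_within(x, t):
    value = int("".join(map(str, x)), 2)
    return t >= len(x) and value > 0 and value % 3 == 0

print(bounded_M(toy_accepts_within, n=4, t=10))  # True, e.g. x = 0011
\end{verbatim}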
[ "<|reference_start|> {A fast quantum mechanical algorithm for database search: were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ---------------------------------------------- <|reference_end|>", "<|reference_start|> Quantum and classical tradeoffs: <|reference_end|>", "<|reference_start|> Quantum search algorithms: We review some of quantum algorithms for search problems: Grover's search algorithm, its generalization to amplitude amplification, the applications of amplitude amplification to various problems and the recent quantum algorithms based on quantum walks. <|reference_end|>", "<|reference_start|> Strengths and Weaknesses of Quantum Computing: Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of $\\NP$ can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class $\\NP$ cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$. We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class $\\NP \\cap \\coNP$ cannot be solved on a QTM in time $o(2^{n/3})$. The former bound is tight since recent work of Grover [in {\\it Proc.\\ $28$th Annual ACM Symposium Theory Comput.}, 1996] shows how to accept the class $\\NP$ relative to any oracle on a quantum computer in time $O(2^{n/2})$. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-679651", "<|cite_2|>": "ss-2279748", "<|cite_3|>": "arxiv-677316", "<|cite_4|>": "ss-1366315"}
2210.05159
<|paper_start|> Title: Can Language Models Be Specific? How? Abstract: Can Language Models Be Specific? How?: "He is a person", "Paris is located on the earth". Both statements are correct but meaningless, due to a lack of specificity. In this paper, we propose to measure how specific the language of pre-trained language models (PLMs) is. To achieve this, we introduce a novel approach to build a benchmark for specificity testing by forming masked token prediction tasks with prompts. For instance, given "Toronto is located in [MASK].", we want to test whether a more specific answer will be better filled in by PLMs, e.g., Ontario instead of Canada. From our evaluations, we show that existing PLMs have only a slight preference for more specific answers. We identify underlying factors affecting the specificity and design two prompt-based methods to improve the specificity. Results show that the specificity of the models can be improved by the proposed methods without additional training. We hope this work can bring awareness to the notion of specificity of language models and encourage the research community to further explore this important but understudied problem. Introduction Pre-trained language models (PLMs) such as BERT <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> and GPT-2/3 <|cite_start|> (Reference: Language models are unsupervised multitask learners: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text.
These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> have achieved quite impressive results in various natural language processing tasks. Recent works show that the parameters of these models contain significant amounts of knowledge <|cite_start|> (Reference: Language Models as Knowledge Bases?: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.) <|cite_end|> <|cite_start|> (Reference: How Much Knowledge Can You Pack Into the Parameters of a Language Model?: It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.) <|cite_end|> <|cite_start|> (Reference: X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models: Language models (LMs) have proven surprisingly successful at capturing factual knowledge by completing cloze-style fill-in-the-blank questions such as "Punta Cana is located in _." However, while knowledge is both written and queried in many languages, studies on LMs' factual representation ability have almost invariably been performed on English. To assess factual knowledge retrieval in LMs in different languages, we create a multilingual benchmark of cloze-style probes for 23 typologically diverse languages. To properly handle language variations, we expand probing methods from single- to multi-word entities, and develop several decoding algorithms to generate multi-token predictions. 
Extensive experimental results provide insights about how well (or poorly) current state-of-the-art LMs perform at this task in languages with more or fewer available resources. We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages. Benchmark data and code have been released at https://x-factr.github.io.) <|cite_end|> <|cite_start|> (Reference: How Can We Know What Language Models Know?: Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.) <|cite_end|> <|cite_start|> (Reference: Language Models are Open Knowledge Graphs: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g, Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions, and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available.) <|cite_end|>, and knowledge stored in PLMs can be extracted by predicting the masked token(s) using prompts. For instance, given the prompt ``J. K. Rowling was born in [MASK].'', PLMs can predict the birthplace of Rowling based on their knowledge. However, there may exist multiple answers for a query, and not all of them are equally specific. \revise{In many situations, we desire a specific answer.} For the example above, the masked token can be replaced by \textit{Yate} (a town), \textit{Gloucestershire} (a county), or \textit{England} (a country).
\revise{To acquire the maximum knowledge (in this example, the town, the county, and the country where Rowling was born), we may prefer the model to fill in \textit{Yate} since \textit{Gloucestershire} and \textit{England} can be further predicted using prompts, e.g., ``Yate is located in [MASK].''} \revise{This means that if the prediction is more specific, we can retrieve more fine-grained information from language models and, in turn, acquire more information.} Besides, a less specific answer is sometimes not useful. For instance, since it is well known that \textit{Chicago} is located in \textit{the USA}, users gain no additional information if the model only predicts that \textit{Chicago} is located in \textit{the USA} instead of \textit{Illinois}. More examples are shown in Figure \ref{fig:intro}. To make an analogy: \revisenew{A good speaker not only needs to be correct but also needs the ability to be specific when desired. The same is true for language models.} \begin{figure}[tp!] \centerline{\includegraphics[width=0.8\linewidth]{intro.pdf}} \vspace{-1mm} \caption{Examples of language modeling that lack specificity. More specific descriptions could be: \uline{feline}, \uline{poet}, and \uline{in Ontario}, respectively.} \label{fig:intro} \vspace{-3.5mm} \end{figure} Although there are works on measuring how much knowledge is stored in PLMs or improving the \textit{correctness} of the predictions <|cite_start|> (Reference: Language Models as Knowledge Bases?: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.) <|cite_end|> <|cite_start|> (Reference: How Much Knowledge Can You Pack Into the Parameters of a Language Model?: It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions.
To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.) <|cite_end|> <|cite_start|> (Reference: How Can We Know What Language Models Know?: Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.) <|cite_end|>, few attempted to measure or improve the \textit{specificity} of predictions made by PLMs. Noteworthy exceptions include the work by <|cite_start|> (Reference: Towards a Human-like Open-Domain Chatbot: We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.) <|cite_end|> <|cite_start|> (Reference: LaMDA: Language Models for Dialog Applications: We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvements on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. 
We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency.) <|cite_end|>, who evaluated the specificity of conversational language models. In their research, specificity was defined and measured within a conversational context -- for instance, the response ``Me too. I love Eurovision songs'' is deemed more specific than simply ``Me too'' to the statement ``I love Eurovision''. Understanding how specific the language of PLMs is can help us better understand the behavior of language models and facilitate downstream applications such as question answering, text generation, and information extraction <|cite_start|> (Reference: Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing: This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g.the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including constantly-updated survey, and paperlist.) <|cite_end|> <|cite_start|> (Reference: UnifiedQA: Crossing Format Boundaries With a Single QA System: Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format. 
As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: Language Models are Open Knowledge Graphs: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g, Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions, and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available.) <|cite_end|>, e.g., making the generated answers/sentences or extracted information more specific or fine-grained. Therefore, we propose to build a benchmark to measure the specificity of the language of PLMs. To reduce human effort and make it easier to further expand the dataset (e.g., to specific domains), we introduce a novel way to construct test data automatically based on transitive relations in Wikidata. Specifically, we extract reasoning paths from Wikidata, e.g., (\text{J. K. Rowling}, \textit{birthplace}, \text{Yate}, \textit{location}, \text{Gloucestershire}, \textit{location}, \textit{England}). Based on the average distance of each object to the subject and the property of transitive relations, we form masked-token-prediction-based probing tasks to measure the specificity, e.g., whether the masked token in ``J. K. Rowling was born in [MASK].'' is better filled in by PLMs with \textit{Yate} than with \textit{England}. The resulting benchmark dataset contains more than \texttt{20,000} probes covering queries from 5 different categories. The quality of the benchmark is high: the judgment \revise{on which answer is more specific} is $\sim97\%$ consistent with human judgment. We provide in-depth analyses of model specificity and study two factors that affect the specificity with our benchmark. As shown by our evaluations in Section \ref{sec:analysis}, existing PLMs, e.g., BERT and GPT-2, similarly have only a slight preference for more specific answers (the more specific answer is preferred in only about $60\%$ of cases).
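To illustrate the kind of probe just described, here is a minimal sketch using a HuggingFace masked LM; the model name, prompt, and candidate fills are illustrative assumptions (candidates must be single tokens in the model's vocabulary), not the paper's exact setup:
\begin{verbatim}
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def fill_prob(prompt, candidate):
    """Probability the masked LM assigns to a single-token candidate
    at the [MASK] position of the prompt."""
    ids = tok(prompt, return_tensors="pt")["input_ids"]
    mask_pos = (ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(ids).logits[0, mask_pos]
    # Assumes the candidate maps to a single vocabulary token.
    cand_id = tok.convert_tokens_to_ids(candidate)
    return torch.softmax(logits, dim=-1)[cand_id].item()

# One probe pair derived from a transitive chain (Ontario -> Canada):
prompt = "Toronto is located in [MASK]."
p_fine, p_coarse = fill_prob(prompt, "ontario"), fill_prob(prompt, "canada")
print("prefers the more specific answer:", p_fine > p_coarse)
\end{verbatim}
Aggregating this comparison over many such probe pairs gives the preference rate for more specific answers that the evaluations above report.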
We also show that, in general, PLMs prefer less specific answers when subjects are not given, and they only have a weak ability to differentiate coarse-grained/fine-grained objects by measuring their similarities to subjects. The results indicate that specificity has been neglected by existing research on language models. \revisenew{How to improve and control it is undoubtedly an interesting and valuable problem.} Based on our observations and analyses, we propose two techniques to improve the specificity of the predictions by modifying the prompts without additional training: \textbf{\textit{Few-shot Prompting}}, where demonstrations with more specific answers are provided to guide the models to produce more specific answers; and \textbf{\textit{Cascade Prompting}}, where \textit{which clauses} are added as suffixes to bias the predictions to be more specific (both prompt constructions are sketched below). Results show that Few-shot Prompting improves specificity well for unidirectional language models like GPT-2, while Cascade Prompting works well for bidirectional language models such as BERT. The main contributions of our work are summarized as follows: \begin{itemize}[nolistsep] \item We propose a novel automatic approach to build a benchmark for specificity testing based on the property of transitive relations. \item \revisenew{We analyze the specificity of several existing PLMs and study two factors that affect the specificity.} \item We propose two methods to improve the specificity by modifying the prompts without additional training. \item We provide in-depth analyses and discussions, suggesting future work to explore and further improve specificity. \end{itemize} Related Work {\flushleft \textbf{Pre-Trained Language Models}:} Pre-trained language models (PLMs) are language models pre-trained on large corpora. In this paper, we will cover two types of pre-trained language models: unidirectional language models, such as GPT-2 <|cite_start|> (Reference: Language models are unsupervised multitask learners: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.)
<|cite_end|>, where the prediction of the current token is only based on previous tokens; and bidirectional language models, such as BERT <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> and RoBERTa <|cite_start|> (Reference: RoBERTa: A Robustly Optimized BERT Pretraining Approach: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.) <|cite_end|>, where both left and right contexts are utilized to predict the current token. {\flushleft \textbf{Knowledge Retrieval from LMs and Prompting}:} Previous works have focused on extracting factual knowledge from PLMs without incorporating external knowledge, which is usually achieved by creating prompts and letting PLMs predict the masked token(s) <|cite_start|> (Reference: Language Models as Knowledge Bases?: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models.
We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.) <|cite_end|> <|cite_start|> (Reference: Inducing Relational Knowledge from BERT: One of the most remarkable properties of word embeddings is the fact that they capture certain types of semantic and syntactic relationships. Recently, pre-trained language models such as BERT have achieved groundbreaking results across a wide range of Natural Language Processing tasks. However, it is unclear to what extent such models capture relational knowledge beyond what is already captured by standard word embeddings. To explore this question, we propose a methodology for distilling relational knowledge from a pre-trained language model. Starting from a few seed instances of a given relation, we first use a large text corpus to find sentences that are likely to express this relation. We then use a subset of these extracted sentences as templates. Finally, we fine-tune a language model to predict whether a given word pair is likely to be an instance of some relation, when given an instantiated template for that relation as input.) <|cite_end|> <|cite_start|> (Reference: X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models: Language models (LMs) have proven surprisingly successful at capturing factual knowledge by completing cloze-style fill-in-the-blank questions such as "Punta Cana is located in _." However, while knowledge is both written and queried in many languages, studies on LMs' factual representation ability have almost invariably been performed on English. To assess factual knowledge retrieval in LMs in different languages, we create a multilingual benchmark of cloze-style probes for 23 typologically diverse languages. To properly handle language variations, we expand probing methods from single- to multi-word entities, and develop several decoding algorithms to generate multi-token predictions. Extensive experimental results provide insights about how well (or poorly) current state-of-the-art LMs perform at this task in languages with more or fewer available resources. We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages. Benchmark data and code have been released at https://x-factr.github.io.) <|cite_end|> <|cite_start|> (Reference: How Can We Know What Language Models Know?: Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. 
Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.) <|cite_end|> <|cite_start|> (Reference: Language Models are Open Knowledge Graphs: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g, Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions, and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available.) <|cite_end|>. They demonstrated that PLMs contain a significant amount of knowledge. By creating appropriate prompts with some additional training, such methods can even achieve performance comparable to SOTA for some specific tasks <|cite_start|> (Reference: Eliciting knowledge from language models using automatically generated prompts: The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach for gauging such knowledge, however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AutoPrompt, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AutoPrompt, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. 
These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.) <|cite_end|> <|cite_start|> (Reference: GPT Understands, Too: Prompting a pretrained language model with natural language patterns has been proved effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts often lead to unstable performance -- e.g., changing a single word in the prompt might result in substantial performance drop. We propose a novel method P-Tuning that employs trainable continuous prompt embeddings in concatenation with discrete prompts. Empirically, P-Tuning not only stabilizes training by minimizing the gap between various discrete prompts, but also improves performance by a sizeable margin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is generally effective for both frozen and tuned language models, under both the fully-supervised and few-shot settings.) <|cite_end|>. Our work is inspired by these works; however, unlike them, whose focus is to measure or improve the \textit{correctness} of the predictions, ours focuses on measuring and improving the \textit{specificity} of the predictions. <|paper_end|>
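As a rough illustration of the two prompt modifications described in the introduction above, the following sketch builds the modified prompts as plain strings; the demonstration wording and the which-clause template are assumptions based on the paper's description, not its verbatim prompts:
\begin{verbatim}
def few_shot_prompt(query, demos):
    """Few-shot Prompting: prepend demonstrations whose answers are
    maximally specific to nudge the LM toward specific fills."""
    return " ".join(f"{q} {a}." for q, a in demos) + " " + query

def cascade_prompt(query, relation_phrase="is located in", depth=2):
    """Cascade Prompting: append 'which <relation> [MASK]' clauses so the
    first [MASK] is biased toward the most specific level of the chain."""
    suffix = "".join(f", which {relation_phrase} [MASK]" for _ in range(depth))
    return query.rstrip(".") + suffix + "."

demos = [("Chicago is located in", "Illinois"),
         ("Sydney is located in", "New South Wales")]
print(few_shot_prompt("Toronto is located in [MASK].", demos))
print(cascade_prompt("Toronto is located in [MASK]."))
# -> Toronto is located in [MASK], which is located in [MASK],
#    which is located in [MASK].
\end{verbatim}
The few-shot variant suits unidirectional models, which condition only on the left context, while the cascade variant exploits a bidirectional model's ability to attend to the appended clauses on the right.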
[ "<|reference_start|> UnifiedQA: Crossing Format Boundaries With a Single QA System: Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems. <|reference_end|>", "<|reference_start|> Language Models as Knowledge Bases?: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fill-in-the-blank\" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA. <|reference_end|>", "<|reference_start|> X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models: Language models (LMs) have proven surprisingly successful at capturing factual knowledge by completing cloze-style fill-in-the-blank questions such as \"Punta Cana is located in _.\" However, while knowledge is both written and queried in many languages, studies on LMs' factual representation ability have almost invariably been performed on English. To assess factual knowledge retrieval in LMs in different languages, we create a multilingual benchmark of cloze-style probes for 23 typologically diverse languages. To properly handle language variations, we expand probing methods from single- to multi-word entities, and develop several decoding algorithms to generate multi-token predictions. 
Extensive experimental results provide insights about how well (or poorly) current state-of-the-art LMs perform at this task in languages with more or fewer available resources. We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages. Benchmark data and code have been released at https://x-factr.github.io. <|reference_end|>", "<|reference_start|> How Can We Know What Language Models Know?: Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as \"Obama is a _ by profession\". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as \"Obama worked as a _\" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA. <|reference_end|>" ]
[ 14, 20, 22, 23 ]
{"<|cite_1|>": "arxiv-175879", "<|multi_cite_2_1|>": "ss-1237666", "<|multi_cite_2_2|>": "ss-832115", "<|multi_cite_3_1|>": "arxiv-221588", "<|multi_cite_3_2|>": "arxiv-249483", "<|multi_cite_3_3|>": "arxiv-295823", "<|multi_cite_3_4|>": "arxiv-236700", "<|multi_cite_3_5|>": "arxiv-298458", "<|multi_cite_4_1|>": "arxiv-221588", "<|multi_cite_4_2|>": "arxiv-249483", "<|multi_cite_4_3|>": "arxiv-236700", "<|multi_cite_12_1|>": "arxiv-245143", "<|multi_cite_12_2|>": "arxiv-393798", "<|multi_cite_5_1|>": "arxiv-357741", "<|multi_cite_5_2|>": "arxiv-263002", "<|multi_cite_5_3|>": "ss-832115", "<|multi_cite_5_4|>": "arxiv-298458", "<|cite_7|>": "ss-1237666", "<|cite_8|>": "arxiv-175879", "<|cite_9|>": "arxiv-216284", "<|multi_cite_10_1|>": "arxiv-221588", "<|multi_cite_10_2|>": "arxiv-236771", "<|multi_cite_10_3|>": "arxiv-295823", "<|multi_cite_10_4|>": "arxiv-236700", "<|multi_cite_10_5|>": "arxiv-298458", "<|multi_cite_11_1|>": "ss-1355246", "<|multi_cite_11_2|>": "arxiv-328337"}
2405.14899-1
<|cite_start|> (Reference: Studying Large Language Model Generalization with Influence Functions: When trying to gain better visibility into a machine learning model in order to understand and mitigate the associated risks, a potentially valuable source of evidence is: which training examples most contribute to a given behavior? Influence functions aim to answer a counterfactual: how would the model's parameters (and hence its outputs) change if a given sequence were added to the training set? While influence functions have produced insights for small models, they are difficult to scale to large language models (LLMs) due to the difficulty of computing an inverse-Hessian-vector product (IHVP). We use the Eigenvalue-corrected Kronecker-Factored Approximate Curvature (EK-FAC) approximation to scale influence functions up to LLMs with up to 52 billion parameters. In our experiments, EK-FAC achieves similar accuracy to traditional influence function estimators despite the IHVP computation being orders of magnitude faster. We investigate two algorithmic techniques to reduce the cost of computing gradients of candidate training sequences: TF-IDF filtering and query batching. We use influence functions to investigate the generalization patterns of LLMs, including the sparsity of the influence patterns, increasing abstraction with scale, math and programming abilities, cross-lingual generalization, and role-playing behavior. Despite many apparently sophisticated forms of generalization, we identify a surprising limitation: influences decay to near-zero when the order of key phrases is flipped. Overall, influence functions give us a powerful new tool for studying the generalization properties of LLMs.) <|cite_end|> <|cite_start|> (Reference: Data in Movia's buses: Movia produces 20,000 trips every day with a total of 700,000 departures. For each departure, forecasts are generated, which are continuously recalculated and distributed to thousands of electronic signs at stops and in vehicles, as well as to rejseplanen.dk. Come and hear how the buses deliver positions every second and how these are converted into forecasts and live maps. Also hear how Movia works with standards that enable the sharing of schedule and real-time data.) <|cite_end|> <|cite_start|> (Reference: LESS: Selecting Influential Data for Targeted Instruction Tuning: Instruction tuning has unlocked powerful capabilities in large language models (LLMs), effectively using combined datasets to develop general-purpose chatbots. However, real-world applications often require a specialized suite of skills (e.g., reasoning). The challenge lies in identifying the most relevant data from these extensive datasets to effectively develop specific capabilities, a setting we frame as targeted instruction tuning. We propose LESS, an optimizer-aware and practically efficient algorithm to effectively estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection. Crucially, LESS adapts existing influence formulations to work with the Adam optimizer and variable-length instruction data. LESS first constructs a highly reusable and transferable gradient datastore with low-dimensional gradient features and then selects examples based on their similarity to few-shot examples embodying a specific capability. Experiments show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Furthermore, the selected data is highly transferable: smaller models can be leveraged to select useful data for larger models and models from different families. Our qualitative analysis shows that our method goes beyond surface form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application.) <|cite_end|> applied influence to pre-training and fine-tuning data of LLMs. <|cite_start|> (Reference: IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models: In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models. These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is lastly introduced. It iteratively selects the data if it provides a maximum marginal gain with respect to quantified influence. Compared with previous efforts on selective annotations, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance under lower time consumption during subset selection. The project page is available at https://skzhang1.github.io/IDEAL/.) <|cite_end|> used influence to select demonstration inputs for annotation. <|cite_start|> (Reference: In-Context Learning Demonstration Selection via Influence Analysis: Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate advantages of InfICL compared to state-of-the-art baselines.) <|cite_end|> builds a classifier on the embeddings of demonstrations using a small LLM and computes influence w.r.t. the classifier for demonstration selection. In contrast, we demonstrate various use cases of our method including on-the-fly demonstration curation, reordering, and noisy demonstration detection.
A contemporary work that shares technical similarity with ours <|cite_start|> (Reference: In-Context Learning Demonstration Selection via Influence Analysis: Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate advantages of InfICL compared to state-of-the-art baselines.) <|cite_end|> focuses on demonstration selection, whereas we focus on attribution; moreover, <|cite_start|> (Reference: In-Context Learning Demonstration Selection via Influence Analysis: Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate advantages of InfICL compared to state-of-the-art baselines.) <|cite_end|> is shown to be less effective than our method in \cref{sec:exp_application}. Additionally, compared to prior works leveraging influence to address specific problems, we apply influence functions to provide a \textit{general attribution} for demonstrations, with many applications that we show empirically. <|paper_end|>
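A minimal, hypothetical sketch of the gradient-similarity flavor of influence that several of the works above build on (TracIn/LESS-style dot products between loss gradients). It is not the exact estimator of any cited paper; model, loss_fn, and the (inputs, target) encodings of demonstrations and query are placeholders the reader must supply.

import torch

def loss_grad(model, loss_fn, inputs, target):
    # Flattened gradient of the loss w.r.t. all trainable parameters.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(inputs), target)
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(model, loss_fn, demonstrations, query):
    # Score each candidate demonstration by how well its loss gradient
    # aligns with the query's loss gradient (higher = more helpful).
    g_query = loss_grad(model, loss_fn, *query)
    return [torch.dot(loss_grad(model, loss_fn, *demo), g_query).item()
            for demo in demonstrations]

Ranking demonstrations by such scores supports on-the-fly curation and reordering, and strongly negative scores are one plausible signal for detecting noisy demonstrations.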
[ "<|reference_start|> LESS: Selecting Influential Data for Targeted Instruction Tuning: Instruction tuning has unlocked powerful capabilities in large language models (LLMs), effectively using combined datasets to develop generalpurpose chatbots. However, real-world applications often require a specialized suite of skills (e.g., reasoning). The challenge lies in identifying the most relevant data from these extensive datasets to effectively develop specific capabilities, a setting we frame as targeted instruction tuning. We propose LESS, an optimizer-aware and practically efficient algorithm to effectively estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection. Crucially, LESS adapts existing influence formulations to work with the Adam optimizer and variable-length instruction data. LESS first constructs a highly reusable and transferable gradient datastore with low-dimensional gradient features and then selects examples based on their similarity to few-shot examples embodying a specific capability. Experiments show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks. Furthermore, the selected data is highly transferable: smaller models can be leveraged to select useful data for larger models and models from different families. Our qualitative analysis shows that our method goes beyond surface form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application. <|reference_end|>", "<|reference_start|> IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models: In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models. These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is lastly introduced. It iteratively selects the data if it provides a maximum marginal gain with respect to quantified influence. Compared with previous efforts on selective annotations, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance under lower time consumption during subset selection. The project page is available at https://skzhang1.github.io/IDEAL/. <|reference_end|>", "<|reference_start|> In-Context Learning Demonstration Selection via Influence Analysis: Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. 
Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate advantages of InfICL compared to state-of-the-art baselines. <|reference_end|>", "<|reference_start|> In-Context Learning Demonstration Selection via Influence Analysis: Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate advantages of InfICL compared to state-of-the-art baselines. <|reference_end|>" ]
[ 2, 3, 4, 5 ]
{"<|multi_cite_1_1|>": "arxiv-361235", "<|multi_cite_1_2|>": "ss-832115", "<|multi_cite_1_3|>": "arxiv-411079", "<|cite_2|>": "ss-832115", "<|multi_cite_3_1|>": "ss-809544", "<|multi_cite_3_2|>": "arxiv-582884", "<|multi_cite_4_1|>": "arxiv-395344", "<|multi_cite_4_2|>": "ss-1860033", "<|cite_5|>": "ss-832115", "<|multi_cite_7_1|>": "ss-1868083", "<|multi_cite_7_2|>": "arxiv-119030", "<|multi_cite_7_3|>": "ss-949043", "<|cite_8|>": "arxiv-118182", "<|cite_9|>": "ss-1177807", "<|cite_10|>": "arxiv-361235", "<|multi_cite_11_1|>": "arxiv-521391", "<|multi_cite_11_2|>": "arxiv-335363", "<|multi_cite_12_1|>": "arxiv-65503", "<|multi_cite_12_2|>": "arxiv-118182", "<|cite_13|>": "arxiv-119030", "<|multi_cite_14_1|>": "arxiv-465723", "<|multi_cite_14_2|>": "arxiv-590729", "<|multi_cite_14_3|>": "arxiv-469638", "<|cite_15|>": "arxiv-119030", "<|cite_16|>": "ss-689112", "<|multi_cite_17_1|>": "arxiv-511477", "<|multi_cite_17_2|>": "arxiv-465723", "<|multi_cite_17_3|>": "ss-832115", "<|multi_cite_17_4|>": "ss-682457", "<|multi_cite_17_5|>": "arxiv-475237", "<|multi_cite_17_6|>": "arxiv-448503", "<|multi_cite_17_7|>": "arxiv-469638", "<|multi_cite_17_9|>": "arxiv-516437", "<|cite_22|>": "ss-832115", "<|cite_23|>": "arxiv-448503", "<|multi_cite_25_1|>": "arxiv-465723", "<|multi_cite_25_2|>": "ss-682457", "<|cite_26|>": "arxiv-475237", "<|multi_cite_27_1|>": "arxiv-469638", "<|multi_cite_27_2|>": "arxiv-538349", "<|multi_cite_27_3|>": "arxiv-516437", "<|multi_cite_28_1|>": "arxiv-511477", "<|multi_cite_28_2|>": "arxiv-516437", "<|multi_cite_18_1|>": "ss-941733", "<|multi_cite_18_2|>": "ss-1177807", "<|multi_cite_18_3|>": "ss-1868083", "<|multi_cite_18_4|>": "arxiv-529133", "<|multi_cite_18_5|>": "arxiv-119030", "<|multi_cite_18_6|>": "ss-949043", "<|multi_cite_19_1|>": "arxiv-469785", "<|multi_cite_19_2|>": "arxiv-498317", "<|multi_cite_19_3|>": "arxiv-586738", "<|multi_cite_19_4|>": "arxiv-484600", "<|multi_cite_19_5|>": "arxiv-503844", "<|cite_29|>": "arxiv-498317", "<|multi_cite_30_1|>": "arxiv-469785", "<|multi_cite_30_2|>": "arxiv-503844", "<|cite_31|>": "arxiv-484600", "<|cite_32|>": "arxiv-586738", "<|multi_cite_20_1|>": "ss-1177807", "<|multi_cite_20_2|>": "ss-1868083", "<|cite_33|>": "arxiv-484600", "<|multi_cite_21_1|>": "arxiv-529133", "<|multi_cite_21_2|>": "ss-1371336", "<|multi_cite_21_3|>": "arxiv-483276", "<|multi_cite_21_4|>": "arxiv-586752", "<|multi_cite_21_5|>": "arxiv-582884", "<|multi_cite_21_6|>": "arxiv-549698", "<|cite_34|>": "arxiv-483276", "<|multi_cite_35_1|>": "arxiv-529133", "<|multi_cite_35_2|>": "ss-1371336", "<|multi_cite_35_3|>": "arxiv-582884", "<|cite_36|>": "arxiv-549698", "<|cite_37|>": "arxiv-586752", "<|cite_38|>": "arxiv-586752", "<|cite_39|>": "arxiv-586752"}
2011.00596
<|paper_start|> Title: Bracketing Encodings for 2-Planar Dependency Parsing Abstract: Bracketing Encodings for 2-Planar Dependency Parsing: We present a bracketing-based encoding that can be used to represent any 2-planar dependency tree over a sentence of length n as a sequence of n labels, hence providing almost total coverage of crossing arcs in sequence labeling parsing. First, we show that existing bracketing encodings for parsing as labeling can only handle a very mild extension of projective trees. Second, we overcome this limitation by taking into account the well-known property of 2-planarity, which is present in the vast majority of dependency syntactic structures in treebanks, i.e., the arcs of a dependency tree can be split into two planes such that arcs in a given plane do not cross. We take advantage of this property to design a method that balances the brackets and that encodes the arcs belonging to each of those planes, allowing for almost unrestricted non-projectivity (around 99.9% coverage) in sequence labeling parsing. The experiments show that our linearizations improve over the accuracy of the original bracketing encoding in highly non-projective treebanks (on average by 0.4 LAS), while achieving a similar speed. Also, they are especially suitable when PoS tags are not used as input parameters to the models. Introduction In the last few years, approaches that cast syntactic parsing as the task of finding a sequence have gained traction for both dependency and constituency parsing. In sequence-to-sequence (seq2seq) parsing <|cite_start|> (Reference: Grammar as a Foreign Language: Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation.) <|cite_end|> <|cite_start|> (Reference: Seq2seq dependency parsing: This paper presents a sequence to sequence (seq2seq) dependency parser by directly predicting the relative position of head for each given word, which therefore results in a truly end-to-end seq2seq dependency parser for the first time. Enjoying the advantage of seq2seq modeling, we enrich a series of embedding enhancement, including firstly introduced subword and node2vec augmentation. Meanwhile, we propose a beam search decoder with tree constraint and subroot decomposition over the sequence to furthermore enhance our seq2seq parser. Our parser is evaluated on benchmark treebanks, being on par with the state-of-the-art parsers by achieving 94.11% UAS on PTB and 88.78% UAS on CTB, respectively.) <|cite_end|>, parse trees are represented as arbitrary-length sequences, where the attention mechanism can be seen as an abstraction of the stack and the buffer in transition-based systems that decides what words are relevant to make a decision at a given time step.
In sequence labeling parsing <|cite_start|> (Reference: Constituent Parsing as Sequence Labeling: We introduce a method to reduce constituent parsing to sequence labeling. For each word w_t, it generates a label that encodes: (1) the number of ancestors in the tree that the words w_t and w_{t+1} have in common, and (2) the nonterminal symbol at the lowest common ancestor. We first prove that the proposed encoding function is injective for any tree without unary branches. In practice, the approach is made extensible to all constituency trees by collapsing unary branches. We then use the PTB and CTB treebanks as testbeds and propose a set of fast baselines. We achieve 90.7% F-score on the PTB test set, outperforming the Vinyals et al. (2015) sequence-to-sequence parser. In addition, sacrificing some accuracy, our approach achieves the fastest constituent parsing speeds reported to date on PTB by a wide margin.) <|cite_end|> <|cite_start|> (Reference: Viable Dependency Parsing as Sequence Labeling: We recast dependency parsing as a sequence labeling problem, exploring several encodings of dependency trees as labels. While dependency parsing by means of sequence labeling had been attempted in existing work, results suggested that the technique was impractical. We show instead that with a conventional BiLSTM-based model it is possible to obtain fast and accurate parsers. These parsers are conceptually simple, not needing traditional parsing algorithms or auxiliary structures. However, experiments on the PTB and a sample of UD treebanks show that they provide a good speed-accuracy tradeoff, with results competitive with more complex approaches.) <|cite_end|>, the tree for a sentence of length $n$ is represented as a sequence of $n$ labels, one per word, so the parsing process is word-synchronous <|cite_start|> (Reference: Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference: We present a constituency parsing algorithm that, like a supertagger, works by assigning labels to each word in a sentence. In order to maximally leverage current neural architectures, the model scores each word's tags in parallel, with minimal task-specific structure. After scoring, a left-to-right reconciliation phase extracts a tree in (empirically) linear time. Our parser achieves 95.4 F1 on the WSJ test set while also achieving substantial speedups compared to current state-of-the-art parsers with comparable accuracies.) <|cite_end|> and can be addressed by frameworks traditionally used for other natural language processing tasks, such as part-of-speech tagging or named-entity recognition. Current sequence labeling parsers combine competitive accuracy with high computational efficiency, while providing extra simplicity using off-the-shelf sequence labeling software without the need for ad-hoc parsing algorithms. In the realm of dependency parsing, pioneering work dates back to \newcite{Spoustova}, who used a relative PoS-tag based encoding to represent trees as label sequences, but the resulting accuracy was not practical even for the standards of the time, probably due to the inability of pre-deep-learning architectures to successfully learn the representation. Using more modern architectures with the ability to contextualize words based on the sentence, and various tree encodings, \newcite{strzyz-etal-2019-viable} were the first to show that competitive accuracy could be reached. 
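As a toy illustration of the general idea (deliberately not the rel-PoS or bracketing encodings discussed here, which are more compact and learnable in practice), the following sketch encodes a dependency tree over $n$ words as exactly $n$ labels using a naive signed head-offset per word, and inverts it:

def encode(heads):
    # heads[i] = index of the head of word i (0 = artificial root),
    # words are 1-indexed; the label is the signed distance to the head.
    return [heads[i] - i for i in range(1, len(heads))]

def decode(labels):
    # Invert the encoding back to a head vector.
    heads = [0]  # dummy entry for position 0
    for i, off in enumerate(labels, start=1):
        heads.append(i + off)
    return heads

heads = [0, 0, 1, 2, 2]   # a 4-word tree rooted at word 1 (index 0 is a dummy)
labels = encode(heads)    # -> [-1, -1, -1, -2], one label per word
assert decode(labels) == heads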
Subsequently, this accuracy has been improved further by techniques like the use of multi-task learning to parse dependencies and constituents together <|cite_start|> (Reference: Sequence Labeling Parsing by Learning Across Representations: We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Secondly, we explore an MTL sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that on average MTL models with auxiliary losses for constituency parsing outperform single-task ones by 1.14 F1 points, and for dependency parsing by 0.62 UAS points.) <|cite_end|> and of contextualized embeddings <|cite_start|> (Reference: Parsing as Pretraining: Recent analyses suggest that encoders pretrained for language modeling capture certain morpho-syntactic structure. However, probing frameworks for word vectors still do not report results on standard setups such as constituent and dependency parsing. This paper addresses this problem and does full parsing (on English) relying only on pretraining architectures -- and no decoding. We first cast constituent and dependency parsing as sequence tagging. We then use a single feed-forward layer to directly map word vectors to labels that encode a linearized tree. This is used to: (i) see how far we can reach on syntax modelling with just pretrained encoders, and (ii) shed some light about the syntax-sensitivity of different word vectors (by freezing the weights of the pretraining network during training). For evaluation, we use bracketing F1-score and LAS, and analyze in-depth differences across representations for span lengths and dependency displacements. The overall results surpass existing sequence tagging parsers on the PTB (93.5%) and end-to-end EN-EWT UD (78.8%).) <|cite_end|>.

[Figure: Bracketing-based encodings with their plane assignment strategies for a non-projective sentence. (a) Projective encoding restricted to a single plane: infeasible to reconstruct a non-projective sentence. (b) Non-projective 2-planar encoding with second-plane-averse greedy plane assignment: the arc $w_3 \rightarrow w_6$ is not assigned a plane because it would cross arcs belonging to both planes, which is forbidden by the 2-planar constraint. (c) Non-projective 2-planar encoding with plane assignment based on restriction propagation on the crossings graph. The red, dotted lines refer to the arcs represented in the second plane, denoted by * in the encoding label.]

While parsing as sequence labeling does not need specific parsing algorithms or data structures, as in graph-based or transition-based parsing, the responsibility of providing suitable parsing representations with reasonable coverage and learnability falls instead on the encoding used to represent trees as sequences of labels. \newcite{strzyz-etal-2019-viable} used four different encodings that obtained substantially different parsing accuracies in the experiments, with two encodings achieving competitive accuracy: the relative PoS tag (rel-PoS) encoding of \newcite{Spoustova} and a new encoding based on balanced brackets, inspired by \newcite{yli-jyra-gomez-rodriguez-2017-generic}. While the encoding of \newcite{Spoustova} achieved good accuracy and has full coverage of non-projective dependency trees, it requires PoS tags to encode the dependency arcs. This can be seen as a weakness, not just because computing and feeding PoS tags increases the latency, but also because the traditional assumption that PoS tagging is needed for parsing is being increasingly called into question <|cite_start|> (Reference: From raw text to universal dependencies - look, no tags!: We present the Uppsala submission to the CoNLL 2017 shared task on parsing from raw text to universal dependencies. Our system is a simple pipeline consisting of two components. The first performs joint word and sentence segmentation on raw text; the second predicts dependency trees from raw words. The parser bypasses the need for part-of-speech tagging, but uses word embeddings based on universal tag distributions. We achieved a macro-averaged LAS F1 of 65.11 in the official test run, which improved to 70.49 after bug fixes. We obtained the 2nd best result for sentence segmentation with a score of 89.03.) <|cite_end|> <|cite_start|> (Reference: An Investigation of the Interactions Between Pre-Trained Word Embeddings, Character Models and POS Tags in Dependency Parsing: We provide a comprehensive analysis of the interactions between pre-trained word embeddings, character models and POS tags in a transition-based dependency parser. While previous studies have shown POS information to be less important in the presence of character models, we show that in fact there are complex interactions between all three techniques.
In isolation each produces large improvements over a baseline system using randomly initialised word embeddings only, but combining them quickly leads to diminishing returns. We categorise words by frequency, POS tag and language in order to systematically investigate how each of the techniques affects parsing quality. For many word categories, applying any two of the three techniques is almost as good as the full combined system. Character models tend to be more important for low-frequency open-class words, especially in morphologically rich languages, while POS tags can help disambiguate high-frequency function words. We also show that large character embedding sizes help even for languages with small character sets, especially in morphologically rich languages.) <|cite_end|> <|cite_start|> (Reference: Constituency Parsing with a Self-Attentive Encoder: We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-of-the-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset.) <|cite_end|> <|cite_start|> (Reference: On the Frailty of Universal POS Tags for Neural UD Parsers: We present an analysis on the effect UPOS accuracy has on parsing performance. Results suggest that leveraging UPOS tags as features for neural parsers requires a prohibitively high tagging accuracy and that the use of gold tags offers a non-linear increase in performance, suggesting some sort of exceptionality. We also investigate what aspects of predicted UPOS tags impact parsing accuracy the most, highlighting some potentially meaningful linguistic facets of the problem.) <|cite_end|>. Low-frequency PoS tags can cause sparsity in the encoding, and low-quality PoS tags could be a potential source of errors in low-resource languages. For this reason, \newcite{lacroix-2019-dependency} proposed two alternative encodings with the same relative indexing philosophy, but without using PoS tags. However, these encodings require a composition of two sequence labeling processes instead of one. On the other hand, the bracketing encoding inspired by <|cite_start|> (Reference: Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing: We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as context-free languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.)
<|cite_end|> represents the trees independently of PoS tags or any other previous tagging step, but it has the limitation of being restricted to a very mild extension of projective trees. \paragraph{Contribution.} In this paper, we extend the idea of the bracketing-based encoding to non-projective parsing by defining a variant that can encode all 2-planar dependency trees <|cite_start|> (Reference: Multiplanarity -- a model for dependency structures in treebanks: The number of treebanks available for different languages is growing steadily. A considerable portion of the recent treebanks use annotation schemes that are based on dependency syntax. In this paper, we give a model for linguistically adequate classes of dependency structures in treebanks. Our model is tested using the Danish Dependency Treebank [13]. The modern dependency syntax was pioneered by Tesniere [27]. His core concepts, binary dependencies and unique heads, are mostly shared in the recent dependency syntactic theories [6, 9, 18, 24, 26]. These theories stress the functional structure, while paying much less attention to linear word order. Lecerf’s projectivity hypothesis [15, 17] assumes a constraint on linear wordorder in dependency analyses. It says that if a word A depends directly on a word B and some word C intervenes between them in linear order, then C depends directly on A or on B or some other intervening word[23]. The projectivity constraint has been a popular simplification in many computational dependency grammars since 1960’s [4]. It does not only make parsing algorithms simple and efficient [16], but also equips us with neat ways to visualize analyses, with an equivalent, constituent based representation for dependency trees [4, 7], and with a criterion for stylistically marked analyses [27, 22] and for abnormal information structure [10]. The tendency that sentences admit projective analyses has been observed in many languages, including French [15], Swedish [20], Finnish [1], and Turkish [21]. Unfortunately, projectivity does not lend itself to adequate treatment of certain non-local syntactic phenomena which are extensively studied in the literature of constituent-based theories such as TG, GB, GPSG, TAG, and LFG. Among these) <|cite_end|>. 2-planar dependency trees have been shown to cover the vast majority of non-projective trees in attested sentences <|cite_start|> (Reference: Squibs: Restricted Non-Projectivity: Coverage vs. Efficiency: In the last decade, various restricted classes of non-projective dependency trees have been proposed with the goal of achieving a good tradeoff between parsing efficiency and coverage of the syntactic structures found in natural languages. We perform an extensive study measuring the coverage of a wide range of such classes on corpora of 30 languages under two different syntactic annotation criteria. The results show that, among the currently known relaxations of projectivity, the best tradeoff between coverage and computational complexity of exact parsing is achieved by either 1-endpoint-crossing trees or MHk trees, depending on the level of coverage desired. We also present some properties of the relation of MHk trees to other relevant classes of trees.) <|cite_end|> and have been used in transition-based parsing <|cite_start|> (Reference: Divisible transition systems and multiplanar dependency parsing: Transition-based parsing is a widely used approach for dependency parsing that combines high efficiency with expressive feature models. 
Many different transition systems have been proposed, often formalized in slightly different frameworks. In this article, we show that a large number of the known systems for projective dependency parsing can be viewed as variants of the same stack-based system with a small set of elementary transitions that can be composed into complex transitions and restricted in different ways. We call these systems divisible transition systems and prove a number of theoretical results about their expressivity and complexity. In particular, we characterize an important subclass called efficient divisible transition systems that parse planar dependency graphs in linear time. We go on to show, first, how this system can be restricted to capture exactly the set of planar dependency trees and, secondly, how the system can be generalized to k-planar trees by making use of multiple stacks. Using the first known efficient test for k-planarity, we investigate the coverage of k-planar trees in available dependency treebanks and find a very good fit for 2-planar trees. We end with an experimental evaluation showing that our 2-planar parser gives significant improvements in parsing accuracy over the corresponding 1-planar and projective parsers for data sets with non-projective dependency trees and performs on a par with the widely used arc-eager pseudo-projective parser.) <|cite_end|> <|cite_start|> (Reference: A Dynamic Oracle for Linear-Time 2-Planar Dependency Parsing: We propose an efficient dynamic oracle for training the 2-Planar transition-based parser, a linear-time parser with over 99% coverage on non-projective syntactic corpora. This novel approach outperforms the static training strategy in the vast majority of languages tested and scored better on most datasets than the arc-hybrid parser enhanced with the SWAP transition, which can handle unrestricted non-projectivity.) <|cite_end|>. We show that our encoding provides better parsing accuracy than the original bracketing-based encoding on highly non-projective UD treebanks, and better accuracy than the rel-PoS encoding when PoS tags are not fed as input parameters to the models. The source code is available at \url{https://github.com/mstrise/dep2label}. <|paper_end|>
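A note on the key property used above: a tree is 2-planar exactly when its crossings graph (vertices = arcs, edges = crossing arc pairs) is bipartite, and the two color classes are the two planes. The following sketch, whose naming and structure are our own rather than the paper's, assigns planes by BFS 2-coloring, loosely in the spirit of the propagation-based strategy of the figure above:

from collections import deque

def crosses(a, b):
    # Two arcs cross iff their endpoints interleave.
    (i, j), (k, l) = sorted(a), sorted(b)
    return i < k < j < l or k < i < l < j

def assign_planes(arcs):
    # Return a plane (0 or 1) per arc, or None if the tree is not 2-planar.
    n = len(arcs)
    adj = [[j for j in range(n) if j != i and crosses(arcs[i], arcs[j])]
           for i in range(n)]
    plane = [None] * n
    for s in range(n):
        if plane[s] is not None:
            continue
        plane[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if plane[v] is None:
                    plane[v] = 1 - plane[u]
                    queue.append(v)
                elif plane[v] == plane[u]:
                    return None  # odd cycle of crossings: not 2-planar
    return plane

# Example: arcs as (head, dependent) index pairs over a sentence.
print(assign_planes([(1, 3), (2, 5), (4, 6)]))  # -> [0, 1, 0]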
[ "<|reference_start|> Grammar as a Foreign Language: Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation. <|reference_end|>", "<|reference_start|> An Investigation of the Interactions Between Pre-Trained Word Embeddings, Character Models and POS Tags in Dependency Parsing: We provide a comprehensive analysis of the interactions between pre-trained word embeddings, character models and POS tags in a transition-based dependency parser. While previous studies have shown POS information to be less important in the presence of character models, we show that in fact there are complex interactions between all three techniques. In isolation each produces large improvements over a baseline system using randomly initialised word embeddings only, but combining them quickly leads to diminishing returns. We categorise words by frequency, POS tag and language in order to systematically investigate how each of the techniques affects parsing quality. For many word categories, applying any two of the three techniques is almost as good as the full combined system. Character models tend to be more important for low-frequency open-class words, especially in morphologically rich languages, while POS tags can help disambiguate high-frequency function words. We also show that large character embedding sizes help even for languages with small character sets, especially in morphologically rich languages. <|reference_end|>", "<|reference_start|> Multiplanarity -- a model for dependency structures in treebanks: The number of treebanks available for different languages is growing steadily. A considerable portion of the recent treebanks use annotation schemes that are based on dependency syntax. In this paper, we give a model for linguistically adequate classes of dependency structures in treebanks. Our model is tested using the Danish Dependency Treebank [13]. The modern dependency syntax was pioneered by Tesniere [27]. His core concepts, binary dependencies and unique heads, are mostly shared in the recent dependency syntactic theories [6, 9, 18, 24, 26]. These theories stress the functional structure, while paying much less attention to linear word order. Lecerf’s projectivity hypothesis [15, 17] assumes a constraint on linear wordorder in dependency analyses. It says that if a word A depends directly on a word B and some word C intervenes between them in linear order, then C depends directly on A or on B or some other intervening word[23]. The projectivity constraint has been a popular simplification in many computational dependency grammars since 1960’s [4]. 
It does not only make parsing algorithms simple and efficient [16], but also equips us with neat ways to visualize analyses, with an equivalent, constituent based representation for dependency trees [4, 7], and with a criterion for stylistically marked analyses [27, 22] and for abnormal information structure [10]. The tendency that sentences admit projective analyses has been observed in many languages, including French [15], Swedish [20], Finnish [1], and Turkish [21]. Unfortunately, projectivity does not lend itself to adequate treatment of certain non-local syntactic phenomena which are extensively studied in the literature of constituent-based theories such as TG, GB, GPSG, TAG, and LFG. Among these <|reference_end|>", "<|reference_start|> Squibs: Restricted Non-Projectivity: Coverage vs. Efficiency: In the last decade, various restricted classes of non-projective dependency trees have been proposed with the goal of achieving a good tradeoff between parsing efficiency and coverage of the syntactic structures found in natural languages. We perform an extensive study measuring the coverage of a wide range of such classes on corpora of 30 languages under two different syntactic annotation criteria. The results show that, among the currently known relaxations of projectivity, the best tradeoff between coverage and computational complexity of exact parsing is achieved by either 1-endpoint-crossing trees or MHk trees, depending on the level of coverage desired. We also present some properties of the relation of MHk trees to other relevant classes of trees. <|reference_end|>" ]
[ 0, 8, 12, 13 ]
{"<|multi_cite_1_1|>": "arxiv-70752", "<|multi_cite_1_2|>": "ss-1181983", "<|multi_cite_2_1|>": "arxiv-177019", "<|multi_cite_2_2|>": "arxiv-193197", "<|cite_3|>": "arxiv-200968", "<|cite_4|>": "arxiv-212608", "<|cite_5|>": "arxiv-246513", "<|multi_cite_6_1|>": "ss-1297601", "<|multi_cite_6_2|>": "arxiv-170440", "<|multi_cite_6_3|>": "arxiv-157167", "<|multi_cite_6_4|>": "arxiv-293870", "<|cite_7|>": "arxiv-126499", "<|cite_8|>": "ss-1297602", "<|cite_9|>": "ss-784179", "<|multi_cite_10_1|>": "ss-1705671", "<|multi_cite_10_2|>": "arxiv-158364"}
2304.06862
<|paper_start|> Title: The Longest Subsequence-Repeated Subsequence Problem Abstract: The Longest Subsequence-Repeated Subsequence Problem: Motivated by computing duplication patterns in sequences, a new fundamental problem called the longest subsequence-repeated subsequence (LSRS) is proposed. Given a sequence $S$ of length $n$, a subsequence-repeated subsequence is a subsequence of $S$ in the form of $x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k}$ with $x_i$ a subsequence of $S$, $x_j\neq x_{j+1}$ and $d_i\geq 2$ for all $i$ in $[k]$ and $j$ in $[k-1]$. We first present an $O(n^6)$ time algorithm to compute the longest cubic subsequences of all the $O(n^2)$ substrings of $S$, improving the trivial $O(n^7)$ bound. Then, an $O(n^6)$ time algorithm for computing the longest subsequence-repeated subsequence (LSRS) of $S$ is obtained. Finally we focus on two variants of this problem. We first consider the constrained version when $\Sigma$ is unbounded, each letter appears in $S$ at most $d$ times and all the letters in $\Sigma$ must appear in the solution. We show that the problem is NP-hard for $d=4$, via a reduction from a special version of SAT (which is obtained from 3-COLORING). We then show that when each letter appears in $S$ at most $d=3$ times, then the problem is solvable in $O(n^5)$ time. Introduction Finding patterns in long sequences is a fundamental problem in string algorithms, combinatorial pattern matching and computational biology. In this paper we are interested in long patterns occurring at a global level, which has also been considered previously. One prominent example is to compute the longest square subsequence of a string $S$ of length $n$, which was solved by Kosowski in $O(n^2)$ time in 2004 <|cite_start|> (Reference: An Efficient Algorithm for the Longest Tandem Scattered Subsequence Problem: ) <|cite_end|>. The bound is conditionally optimal, as any $O(n^{2-\varepsilon})$ time solution (for any $\varepsilon>0$) would lead to a subquadratic bound for the traditional Longest Common Subsequence (LCS) problem, which is not possible unless the SETH conjecture fails <|cite_start|> (Reference: Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping: Classic similarity measures of strings are longest common subsequence and Levenshtein distance (i.e., the classic edit distance). A classic similarity measure of curves is dynamic time warping. These measures can be computed by simple $O(n^2)$ dynamic programming algorithms, and despite much effort no algorithms with significantly better running time are known. We prove that, even restricted to binary strings or one-dimensional curves, respectively, these measures do not have strongly subquadratic time algorithms, i.e., no algorithms with running time $O(n^{2-\varepsilon})$ for any $\varepsilon > 0$, unless the Strong Exponential Time Hypothesis fails. We generalize the result to edit distance for arbitrary fixed costs of the four operations (deletion in one of the two strings, matching, substitution), by identifying trivial cases that can be solved in constant time, and proving quadratic-time hardness on binary strings for all other cost choices. This improves and generalizes the known hardness result for Levenshtein distance [Backurs, Indyk STOC'15] by the restriction to binary strings and the generalization to arbitrary costs, and adds important problems to a recent line of research showing conditional lower bounds for a growing number of quadratic time problems.
As our main technical contribution, we introduce a framework for proving quadratic-time hardness of similarity measures. To apply the framework it suffices to construct a single gadget, which encapsulates all the expressive power necessary to emulate a reduction from satisfiability. Finally, we prove quadratic-time hardness for longest palindromic subsequence and longest tandem subsequence via reductions from longest common subsequence, showing that conditional lower bounds based on the Strong Exponential Time Hypothesis also apply to string problems that are not necessarily similarity measures.) <|cite_end|>. Nonetheless, a slight improvement was presented by Tiskin <|cite_start|> (Reference: Semi-local string comparison: algorithmic techniques and applications: A classical measure of string comparison is given by the longest common subsequence (LCS) problem on a pair of strings. We consider its generalisation, called the semi-local LCS problem, which arises naturally in many string-related problems. The semi-local LCS problem asks for the LCS scores for each of the input strings against every substring of the other input string, and for every prefix of each input string against every suffix of the other input string. Such a comparison pattern provides a much more detailed picture of string similarity than a single LCS score; it also arises naturally in many string-related problems. In fact, the semi-local LCS problem turns out to be fundamental for string comparison, providing a powerful and flexible alternative to classical dynamic programming. It is especially useful when the input to a string comparison problem may not be available all at once: for example, comparison of dynamically changing strings; comparison of compressed strings; parallel string comparison. The same approach can also be applied to permutation strings, providing efficient solutions for local versions of the longest increasing subsequence (LIS) problem, and for the problem of computing a maximum clique in a circle graph. Furthermore, the semi-local LCS problem turns out to have surprising connections in a few seemingly unrelated fields, such as computational geometry and algebra of semigroups. This work is devoted to exploring the structure of the semi-local LCS problem, its efficient solutions, and its applications in string comparison and other related areas, including computational molecular biology.) <|cite_end|>; and Inoue et al. recently tried to solve the problem by introducing the parameter $M$ (which is the number of matched pairs in $S$) and $r$ (which is the length of the solution) <|cite_start|> (Reference: Longest Square Subsequence Problem Revisited: The longest square subsequence (LSS) problem consists of computing a longest subsequence of a given string $S$ that is a square, i.e., a longest subsequence of form $XX$ appearing in $S$. It is known that an LSS of a string $S$ of length $n$ can be computed using $O(n^2)$ time [Kosowski 2004], or with (model-dependent) polylogarithmic speed-ups using $O(n^2 (\log \log n)^2 / \log^2 n)$ time [Tiskin 2013]. We present the first algorithm for LSS whose running time depends on other parameters, i.e., we show that an LSS of $S$ can be computed in $O(r \min\{n, M\}\log \frac{n}{r} + n + M \log n)$ time with $O(M)$ space, where $r$ is the length of an LSS of $S$ and $M$ is the number of matching points on $S$.) <|cite_end|>. 
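The reduction behind these results can be spelled out: a square subsequence $XX$ of $S$ can always be split at some position $i$ so that one copy of $X$ lies in $S[:i]$ and the other in $S[i:]$, hence the answer equals $\max_i 2\cdot\mathrm{LCS}(S[:i], S[i:])$. The sketch below (assuming $|S|\geq 2$) implements this observation directly in $O(n^3)$ time; Kosowski's algorithm attains $O(n^2)$ overall:

def lcs(a, b):
    # Classic O(|a||b|) dynamic program for the longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def longest_square_subsequence(s):
    # Try every split point; X must be common to the prefix and the suffix.
    return max(2 * lcs(s[:i], s[i:]) for i in range(1, len(s)))

assert longest_square_subsequence("acgtacgt") == 8  # (acgt)^2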
In biology, it was found by Szostak and Wu as early as 1980 that gene duplication is the driving force of evolution <|cite_start|> (Reference: Unequal crossing over in the ribosomal DNA of Saccharomyces cerevisiae: ) <|cite_end|>. There are two kinds of duplications: arbitrary segmental duplications (i.e., an arbitrary segment is selected and pasted somewhere else) and tandem duplications (i.e., in the form of $X\rightarrow XX$, where $X$ is any segment of the input sequence). It is known that the former duplications occur frequently in cancer genomes <|cite_start|> (Reference: Emerging landscape of oncogenic signatures across human cancers: ) <|cite_end|> <|cite_start|> (Reference: Integrated Genomic Analyses of Ovarian Carcinoma: ) <|cite_end|> <|cite_start|> (Reference: Segmental duplications and copy-number variation in the human genome: The human genome contains numerous blocks of highly homologous duplicated sequence. This higher-order architecture provides a substrate for recombination and recurrent chromosomal rearrangement associated with genomic disease. However, an assessment of the role of segmental duplications in normal variation has not yet been made. On the basis of the duplication architecture of the human genome, we defined a set of 130 potential rearrangement hotspots and constructed a targeted bacterial artificial chromosome (BAC) microarray (with 2,194 BACs) to assess copy-number variation in these regions by array comparative genomic hybridization. Using our segmental duplication BAC microarray, we screened a panel of 47 normal individuals, who represented populations from four continents, and we identified 119 regions of copy-number polymorphism (CNP), 73 of which were previously unreported. We observed an equal frequency of duplications and deletions, as well as a 4-fold enrichment of CNPs within hotspot regions, compared with control BACs (P < .000001), which suggests that segmental duplications are a major catalyst of large-scale variation in the human genome. Importantly, segmental duplications themselves were also significantly enriched >4-fold within regions of CNP. Almost without exception, CNPs were not confined to a single population, suggesting that these either are recurrent events, having occurred independently in multiple founders, or were present in early human populations. Our study demonstrates that segmental duplications define hotspots of chromosomal rearrangement, likely acting as mediators of normal variation as well as genomic disease, and it suggests that the consideration of genomic architecture can significantly improve the ascertainment of large-scale rearrangements. Our specialized segmental duplication BAC microarray and associated database of structural polymorphisms will provide an important resource for the future characterization of human genomic disorders.) <|cite_end|>. On the other hand, the latter are common under different scenarios; for example, it is known that the tandem duplication of 3 nucleotides {\tt CAG} is closely related to Huntington's disease <|cite_start|> (Reference: A novel gene containing a trinucleotide repeat that is expanded and unstable on Huntington's disease chromosomes: ) <|cite_end|>. In addition, tandem duplications can occur at the genome level (across different genes) for certain types of cancer <|cite_start|> (Reference: Reconstructing cancer genomes from paired-end sequencing data: ) <|cite_end|>.
As duplication is common in biology, it was not a surprise that in the first sequenced human genome around 3\% of the genetic content is in the form of tandem repeats <|cite_start|> (Reference: Initial sequencing and analysis of the human genome: ) <|cite_end|>. In 2004, Leupold et al. posed a fundamental question regarding tandem duplications: what is the complexity of computing the minimum tandem duplication distance between two sequences $A$ and $B$ (i.e., the minimum number of tandem duplications needed to convert $A$ into $B$)? In 2020, Lafond et al. answered this open question by proving that this problem is NP-hard for an unbounded alphabet <|cite_start|> (Reference: The Tandem Duplication Distance is NP-hard: In computational biology, tandem duplication is an important biological phenomenon which can occur either at the genome or at the DNA level. A tandem duplication takes a copy of a genome segment and inserts it right after the segment - this can be represented as the string operation $AXB \Rightarrow AXXB$. For example, Tandem exon duplications have been found in many species such as human, fly or worm, and have been largely studied in computational biology. The Tandem Duplication (TD) distance problem we investigate in this paper is defined as follows: given two strings $S$ and $T$ over the same alphabet, compute the smallest sequence of tandem duplications required to convert $S$ to $T$. The natural question of whether the TD distance can be computed in polynomial time was posed in 2004 by Leupold et al. and had remained open, despite the fact that tandem duplications have received much attention ever since. In this paper, we prove that this problem is NP-hard. We further show that this hardness holds even if all characters of $S$ are distinct. This is known as the exemplar TD distance, which is of special relevance in bioinformatics. One of the tools we develop for the reduction is a new problem called the Cost-Effective Subgraph, for which we obtain W[1]-hardness results that might be of independent interest. We finally show that computing the exemplar TD distance between $S$ and $T$ is fixed-parameter tractable. Our results open the door to many other questions, and we conclude with several open problems.) <|cite_end|>. Later in <|cite_start|> (Reference: Computing the Tandem Duplication Distance is NP-Hard: ) <|cite_end|>, Lafond et al. proved that the problem is NP-hard even if $|\Sigma|\geq 4$ by encoding each letter in the unbounded alphabet proof with a square-free string over a new alphabet of size 4 (modified from Leech's construction <|cite_start|> (Reference: 2726. A problem on strings of beads: 2726. A problem on strings of beads. A problem which is of some interest in the theory of semigroups and in other contexts may be stated thus. It is required to construct a string of beads of three colours such that there is no " local repetition " in the pattern, i.e., there do not exist integers m >0, n such that the (n + 1)th to the (n +m)th beads have the same respective colours as the (n + m + l)th to the (n + 2m)th. Thus if the beads are represented by the digits 0 1 2 according to their colour, the string must not contain consecutive beads 0 0 of the same colour, or consecutive pairs 01 0 1, etc., anywhere in the sequence. The sequence is to extend to infinity in both directions.
The only solutions known to me * are indirect to the extent that they involve constructing an auxiliary sequence (of binary and ternary digits respectively) from which the required sequence is derived by a further construction. I give here a construction which is direct, producing the required sequence without any intermediate constructon, and which is substantially different from either of the solutions mentioned above. Unlike these latter, it treats the three colours essentially equivalently. The present solution consists of constructing three finite blocks of beads of equal length, denoted by A, B, C, which are such that given any permissible arrangement of the beads, the corresponding arrangement of the blocks is also permissible. As this arrangement of the blocks gives a longer string of beads than the original one, the process may be repeated to give arbitrarily long strings of beads. If in particular the initial arrangement of the beads is that of one of the blocks, each new string of beads is an extension of the previous one, and this string can therefore be extended without limit. Consider the blocks) <|cite_end|>), which covers the case most relevant to biology, i.e., when $\Sigma=\{{\tt A},{\tt C},{\tt G},{\tt T}\}$ or $\Sigma=\{{\tt A},{\tt C},{\tt G},{\tt U}\}$ <|cite_start|> (Reference: Computing the Tandem Duplication Distance is NP-Hard: ) <|cite_end|>. Independently, Cicalese and Pilati showed that the problem is NP-hard for $|\Sigma|=5$ using a different encoding method <|cite_start|> (Reference: The Tandem Duplication Distance Problem is hard over bounded alphabets: A tandem duplication denotes the process of inserting a copy of a segment of DNA adjacent to its original position. More formally, a tandem duplication can be thought of as an operation that converts a string $S = AXB$ into a string $T = AXXB.$ As they appear to be involved in genetic disorders, tandem duplications are widely studied in computational biology. Also, tandem duplication mechanisms have been recently studied in different contexts, from formal languages, to information theory, to error-correcting codes for DNA storage systems. The problem of determining the complexity of computing the tandem duplication distance between two given strings was proposed by [Leupold et al., 2004] and, very recently, it was shown to be NP-hard for the case of unbounded alphabets [Lafond et al., STACS2020]. In this paper, we significantly improve this result and show that the tandem duplication distance problem is NP-hard already for the case of strings over an alphabet of size $\leq 5.$ We also study some special classes of strings were it is possible to give linear time solutions to the existence problem: given strings $S$ and $T$ over the same alphabet, decide whether there exists a sequence of duplications converting $S$ into $T$. A polynomial time algorithm that solves the existence problem was only known for the case of the binary alphabet.) <|cite_end|>. Besides duplication, another driving force in evolution is certainly mutation. As a simple example, suppose we have a toy singleton genome ${\tt ACGT}$ (a real genome would certainly have a much larger alphabet) that evolves through two tandem duplications into ${\tt ACGT}\cdot {\tt ACGT} \cdot {\tt ACGT}$, and then through another one on the second ${\tt GTA}$ into $H={\tt ACGT}\cdot {\tt AC}\cdot {\tt GTA}\cdot {\tt GTA} \cdot {\tt CGT}$.
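To make the operation concrete, the toy derivation above can be replayed in a few lines of Python (a minimal sketch of ours; the function name and the 0-based indices are chosen only to match this example):

```python
def tandem_duplicate(s: str, i: int, j: int) -> str:
    """Apply the tandem duplication AXB => AXXB, where X = s[i:j]."""
    return s[:j] + s[i:j] + s[j:]

g = "ACGT"
g = tandem_duplicate(g, 0, 4)  # ACGT -> ACGT.ACGT
g = tandem_duplicate(g, 0, 4)  # -> ACGT.ACGT.ACGT
g = tandem_duplicate(g, 6, 9)  # duplicate the second GTA
assert g == "ACGTACGTAGTACGT"  # H = ACGT.AC.GTA.GTA.CGT
```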
If some mutations then occur in $H$, e.g., the first $G$ is deleted and the second $G$ is changed to $T$, giving $H'={\tt ACT}\cdot {\tt AC}\cdot {\tt TTA}\cdot {\tt GTA}\cdot {\tt CGT}$, then it becomes difficult to retrieve the tandem duplications from $H'$. Motivated by the above applications, Lai et al. <|cite_start|> (Reference: Beyond the Longest Letter-duplicated Subsequence Problem: Given a sequence $S$ of length $n$, a letter-duplicated subsequence is a subsequence of $S$ in the form of $x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k}$ with $x_i\in\Sigma$, $x_j\neq x_{j+1}$ and $d_i\geq 2$ for all $i$ in $[k]$ and $j$ in $[k-1]$. A linear time algorithm for computing the longest letter-duplicated subsequence (LLDS) of $S$ can be easily obtained. In this paper, we focus on two variants of this problem. We first consider the constrained version when $\Sigma$ is unbounded, each letter appears in $S$ at least 6 times and all the letters in $\Sigma$ must appear in the solution. We show that the problem is NP-hard (a further twist indicates that the problem does not admit any polynomial time approximation). The reduction is from possibly the simplest version of SAT that is NP-complete, $(\leq 2,1,\leq 3)$-SAT, where each variable appears at most twice positively and exact once negatively, and each clause contains at most three literals and some clauses must contain exactly two literals. (We hope that this technique will serve as a general tool to help us proving the NP-hardness for some more tricky sequence problems involving only one sequence -- much harder than with at least two input sequences, which we apply successfully at the end of the paper on some extra variations of the LLDS problem.) We then show that when each letter appears in $S$ at most 3 times, then the problem admits a factor $1.5-O(\frac{1}{n})$ approximation. Finally, we consider the weighted version, where the weight of a block $x_i^{d_i} (d_i\geq 2)$ could be any positive function which might not grow with $d_i$. We give a non-trivial $O(n^2)$ time dynamic programming algorithm for this version, i.e., computing an LD-subsequence of $S$ whose weight is maximized.) <|cite_end|> recently proposed the following problem, called the {\em Longest Letter-Duplicated Subsequence}: Given a sequence $S$ of length $n$, compute a longest letter-duplicated subsequence (LLDS) of $S$, i.e., a subsequence of $S$ in the form $x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k}$ with $x_i\in\Sigma$, where $x_j\neq x_{j+1}$ and $d_i\geq 2$ for all $i$ in $[k]$, $j$ in $[k-1]$, and $\sum_{i\in [k]}d_i$ is maximized. A simple linear time algorithm can be obtained to solve LLDS. However, a constrained variation, in which all letters in $\Sigma$ must appear in the solution, was shown to be NP-hard. In this paper, we extend the work by Lai et al. by looking at a more general version of LLDS, namely, the Longest Subsequence-repeated Subsequence (LSRS) problem of $S$, which follows much the same definition as above except that each $x_i$ is a subsequence of $S$ (instead of a letter). As a comparison, for the sequence $H'$, one of the optimal LLDS solutions is ${\tt AATTTT}={\tt A}^2{\tt T}^4$, while the LSRS solution is ${\tt ACAC}\cdot {\tt TAGTAG}=({\tt AC})^2({\tt TAG})^2$, which clearly gives more information about the duplication history. This motivates us to study LSRS and related problems in this paper.
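As a concrete illustration of the LLDS definition, the following dynamic program computes the length of a longest letter-duplicated subsequence in quadratic time; this is a sketch of ours for exposition only, not the linear-time algorithm of Lai et al. The state closed[i] is the best total length of a valid LD-subsequence whose last used character is $S[i]$ (so its last block already has length at least 2).

```python
def lld_subsequence_length(s: str) -> int:
    """Length of a longest subsequence x1^d1 ... xk^dk of s with
    x_j != x_{j+1} and every d_i >= 2 (0 if no such subsequence exists)."""
    NEG = float("-inf")
    closed = [NEG] * len(s)
    for i in range(len(s)):
        # best_prev = best valid subsequence ending strictly before the
        # current j whose last block letter differs from s[i]
        best_prev = 0
        for j in range(i):
            if s[j] == s[i]:
                closed[i] = max(closed[i],
                                closed[j] + 1,   # extend the last block
                                best_prev + 2)   # open a new block using s[j], s[i]
            elif closed[j] > best_prev:
                best_prev = closed[j]
    best = max(closed, default=NEG)
    return best if best > 0 else 0

assert lld_subsequence_length("ACTACTTAGTACGT") == 6  # AATTTT, as in the text
```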
Let $d$ be the maximum occurrence of any letter in the input string $S$, with $|S|=n$. Let \emph{LSRS+}$(d)$ be the constrained version in which all letters in $\Sigma$ must appear in the solution and the maximum occurrence of any letter in $S$ is at most $d$. We summarize the results of this paper as follows. \begin{enumerate} \item We show that the longest cubic subsequences of all substrings of $S$ can be computed in $O(n^6)$ time, improving the trivial $O(n^7)$ bound. \item We show that LSRS can be solved in $O(n^6)$ time. \item When $d\geq 4$, \emph{LSRS+}$(d)$ is NP-complete. \item When $d=3$, \emph{LSRS+}$(3)$ can be solved in $O(n^4)$ time. \end{enumerate} Note that the parameter $d$, i.e., the maximum duplication number, is practically meaningful in bioinformatics, since whole genome duplication is a rare event in many genomes and the number of duplicates is usually small. For example, it is known that plants have undergone up to three rounds of whole genome duplications, resulting in a number of duplicates bounded by 8 <|cite_start|> (Reference: Gene loss under neighborhood selection following whole genome duplication and the reconstruction of the ancestral populus genome.: We develop criteria to detect neighborhood selection effects on gene loss following whole genome duplication, and apply them to the recently sequenced poplar (Populus trichocarpa) genome. We improve on guided genome halving algorithms so that several thousand gene sets, each containing two paralogs in the descendant T of the doubling event and their single ortholog from an undoubled reference genome R, can be analyzed to reconstruct the ancestor A of T at the time of doubling. At the same time, large numbers of defective gene sets, either missing one paralog from T or missing their ortholog in R, may be incorporated into the analysis in a consistent way. We apply this genomic rearrangement distance-based approach to the poplar and grapevine (Vitis vinifera) genomes, as T and R respectively. We conclude that, after chromosome doubling, the "choice" of which paralogous gene pairs will lose copies is random, but that the retention of strings of single-copy genes on one chromosome versus the other is decidedly non-random.) <|cite_end|>. It should also be noted that our LSRS and LSRS+ problems seem to be related to the recently studied Longest Run Subsequence (LRS) problem <|cite_start|> (Reference: Using the longest run subsequence problem within homology-based scaffolding: ) <|cite_end|>, which is NP-hard, and the Longest (Sub-)Periodic Subsequence problem <|cite_start|> (Reference: Longest (Sub-)Periodic Subsequence: We present an algorithm computing the longest periodic subsequence of a string of length $n$ in $O(n^7)$ time with $O(n^4)$ words of space. We obtain improvements when restricting the exponents or extending the search allowing the reported subsequence to be subperiodic down to $O(n^3)$ time and $O(n^2)$ words of space.) <|cite_end|>, which is polynomially solvable. However, these two problems are different from our LSRS and LSRS+ problems. For instance, in an LRS solution a letter can appear in at most one run, whereas in an LSRS or LSRS+ solution, say ${\tt ACAC}\cdot {\tt TAGTAG}$ for the input string $H'$, a substring (e.g., {\tt AC}) can appear many times; hence a letter (e.g., {\tt A}) can appear many times, albeit non-consecutively.
On the other hand, in the Longest (Sub-)Periodic Subsequence problem one looks only for the repetition of a single subsequence of the input string, while in our LSRS and LSRS+ problems we need to find the repetitions of multiple subsequences of the input string (e.g., {\tt AC} and {\tt TAG}). This paper is organized as follows. In Section 2 we give the necessary definitions. In Section 3 we give an $O(n^6)$ time algorithm for computing the longest cubic subsequences of all substrings of $S$, as well as the solution for LSRS. In Section 4 we prove that \emph{LSRS+}$(4)$ is NP-hard and then show that \emph{LSRS+}$(3)$ can be solved in polynomial time. We conclude the paper in Section 5. <|paper_end|>
[ "<|reference_start|> Integrated Genomic Analyses of Ovarian Carcinoma: <|reference_end|>", "<|reference_start|> The Tandem Duplication Distance is NP-hard: In computational biology, tandem duplication is an important biological phenomenon which can occur either at the genome or at the DNA level. A tandem duplication takes a copy of a genome segment and inserts it right after the segment - this can be represented as the string operation $AXB \\Rightarrow AXXB$. For example, Tandem exon duplications have been found in many species such as human, fly or worm, and have been largely studied in computational biology. The Tandem Duplication (TD) distance problem we investigate in this paper is defined as follows: given two strings $S$ and $T$ over the same alphabet, compute the smallest sequence of tandem duplications required to convert $S$ to $T$. The natural question of whether the TD distance can be computed in polynomial time was posed in 2004 by Leupold et al. and had remained open, despite the fact that tandem duplications have received much attention ever since. In this paper, we prove that this problem is NP-hard. We further show that this hardness holds even if all characters of $S$ are distinct. This is known as the exemplar TD distance, which is of special relevance in bioinformatics. One of the tools we develop for the reduction is a new problem called the Cost-Effective Subgraph, for which we obtain W[1]-hardness results that might be of independent interest. We finally show that computing the exemplar TD distance between $S$ and $T$ is fixed-parameter tractable. Our results open the door to many other questions, and we conclude with several open problems. <|reference_end|>", "<|reference_start|> Beyond the Longest Letter-duplicated Subsequence Problem: Given a sequence $S$ of length $n$, a letter-duplicated subsequence is a subsequence of $S$ in the form of $x_1^{d_1}x_2^{d_2}\\cdots x_k^{d_k}$ with $x_i\\in\\Sigma$, $x_j\\neq x_{j+1}$ and $d_i\\geq 2$ for all $i$ in $[k]$ and $j$ in $[k-1]$. A linear time algorithm for computing the longest letter-duplicated subsequence (LLDS) of $S$ can be easily obtained. In this paper, we focus on two variants of this problem. We first consider the constrained version when $\\Sigma$ is unbounded, each letter appears in $S$ at least 6 times and all the letters in $\\Sigma$ must appear in the solution. We show that the problem is NP-hard (a further twist indicates that the problem does not admit any polynomial time approximation). The reduction is from possibly the simplest version of SAT that is NP-complete, $(\\leq 2,1,\\leq 3)$-SAT, where each variable appears at most twice positively and exact once negatively, and each clause contains at most three literals and some clauses must contain exactly two literals. (We hope that this technique will serve as a general tool to help us proving the NP-hardness for some more tricky sequence problems involving only one sequence -- much harder than with at least two input sequences, which we apply successfully at the end of the paper on some extra variations of the LLDS problem.) We then show that when each letter appears in $S$ at most 3 times, then the problem admits a factor $1.5-O(\\frac{1}{n})$ approximation. Finally, we consider the weighted version, where the weight of a block $x_i^{d_i} (d_i\\geq 2)$ could be any positive function which might not grow with $d_i$. 
We give a non-trivial $O(n^2)$ time dynamic programming algorithm for this version, i.e., computing an LD-subsequence of $S$ whose weight is maximized. <|reference_end|>", "<|reference_start|> Using the longest run subsequence problem within homology-based scaffolding: <|reference_end|>" ]
[ 6, 11, 16, 18 ]
{"<|cite_1|>": "ss-2083622", "<|cite_2|>": "arxiv-72507", "<|cite_3|>": "arxiv-846", "<|cite_4|>": "arxiv-268515", "<|cite_5|>": "ss-2491461", "<|multi_cite_6_1|>": "ss-1940852", "<|multi_cite_6_2|>": "ss-2491462", "<|multi_cite_6_3|>": "ss-2491463", "<|cite_7|>": "ss-2491464", "<|cite_8|>": "ss-2491465", "<|cite_9|>": "ss-899829", "<|cite_10|>": "arxiv-209445", "<|cite_11|>": "ss-2491466", "<|cite_12|>": "ss-2491467", "<|cite_13|>": "ss-2491466", "<|cite_14|>": "arxiv-257604", "<|cite_15|>": "arxiv-386713", "<|cite_16|>": "ss-2491468", "<|cite_17|>": "ss-2491469", "<|cite_18|>": "arxiv-399194"}
2004.04000
<|paper_start|> Title: Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game Abstract: Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game: Learning how to adapt to complex and dynamic environments is one of the most important factors that contribute to our intelligence. Endowing artificial agents with this ability is not a simple task, particularly in competitive scenarios. In this paper, we present a broad study on how popular reinforcement learning algorithms can be adapted and implemented to learn and to play a real-world implementation of a competitive multiplayer card game. We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each others' playing style. Finally, we pinpoint how the behavior of each agent derives from their learning style and create a baseline for future research on this scenario. Introduction With the current interest in reinforcement learning sparked by the development of deep reinforcement learning techniques <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|>, novel methods and mechanisms have been developed in recent years. Such mechanisms allow an artificial agent to map between states and actions within highly complex state representations in an end-to-end manner, reducing the need for strong, well-defined prior knowledge. Recently, reinforcement learning agents have been used for guiding autonomous cars <|cite_start|> (Reference: Deep Reinforcement Learning framework for Autonomous Driving: Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS.
Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles.) <|cite_end|> <|cite_start|> (Reference: Navigating Occluded Intersections with Autonomous Vehicles using Deep Reinforcement Learning: Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers. We explore the effectiveness of Deep Reinforcement Learning to handle intersection problems. Using recent advances in Deep RL, we are able to learn policies that surpass the performance of a commonly-used heuristic approach in several metrics including task completion time and goal success rate and have limited ability to generalize. We then explore a system's ability to learn active sensing behaviors to enable navigating safely in the case of occlusions. Our analysis, provides insight into the intersection handling problem, the solutions learned by the network point out several shortcomings of current rule-based methods, and the failures of our current deep reinforcement learning system point to future research directions.) <|cite_end|>, predicting the stock exchange impact <|cite_start|> (Reference: Using Reinforcement Learning in the Algorithmic Trading Problem: The development of reinforced learning methods has extended application to many areas including algorithmic trading. In this paper trading on the stock exchange is interpreted into a game with a Markov property consisting of states, actions, and rewards. A system for trading the fixed volume of a financial instrument is proposed and experimentally tested; this is based on the asynchronous advantage actor-critic method with the use of several neural network architectures. The application of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture demonstrated a trading strategy for the RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum accounting for commission. The project source code is available via the following link: http://github.com/evgps/a3c_trading.) <|cite_end|> <|cite_start|> (Reference: Reinforcement learning in financial markets: Recently there has been an exponential increase in the use of artificial intelligence for trading in financial markets such as stock and forex. Reinforcement learning has become of particular interest to financial traders ever since the program AlphaGo defeated the strongest human contemporary Go board game player Lee Sedol in 2016. We systematically reviewed all recent stock/forex prediction or trading articles that used reinforcement learning as their primary machine learning method. All reviewed articles had some unrealistic assumptions such as no transaction costs, no liquidity issues and no bid or ask spread issues. Transaction costs had significant impacts on the profitability of the reinforcement learning algorithms compared with the baseline algorithms tested. Despite showing statistically significant profitability when reinforcement learning was used in comparison with baseline models in many studies, some showed no meaningful level of profitability, in particular with large changes in the price pattern between the system training and testing data. Furthermore, few performance comparisons between reinforcement learning and other sophisticated machine/deep learning models were provided. 
The impact of transaction costs, including the bid/ask spread on profitability has also been assessed. In conclusion, reinforcement learning in stock/forex trading is still in its early development and further research is needed to make it a reliable method in this domain.) <|cite_end|>, and coordinating a swarm of robots to protect the environment <|cite_start|> (Reference: {Distributed Deep Reinforcement Learning for Fighting Forest Fires with a Network of Aerial Robots: This paper proposes a distributed deep reinforcement learning (RL) based strategy for a team of Unmanned Aerial Vehicles (UAVs) to autonomously fight forest fires. We first model the forest fire as a Markov decision process (MDP) with a factored structure. We consider optimally controlling the forest fire without agents using dynamic programming, and show any exact solution and many approximate solutions are computationally intractable. Given the problem complexity, we consider a deep RL approach in which each agent learns a policy requiring only local information. We show with Monte Carlo simulations that the deep RL policy outperforms a hand-tuned heuristic, and scales well for various forest sizes and different numbers of UAVs as well as variations in model parameters. Experimental demonstrations with mobile robots fighting a simulated forest fire in the Robotarium at the Georgia Institute of Technology are also presented.) <|cite_end|> <|cite_start|> (Reference: Reinforcement learning and convolutional neural network system for firefighting rescue robot: In this paper, we combine the machine learning and neural network to build some modules for the fire rescue robot application. In our research, we build the robot legs module with Q-learning. We also finish the face detection with color sensors and infrared sensors. It is usual that image fusion is done when we want to use two kinds of sensors. Kalman filter is chosen to meet our requirement. After we finish some indispensable steps, we use sliding windows to choose our region of interest to make the system’s calculation lower. The least step is convolutional neural network. We design a seven layers neural network to find the face feature and distinguish it or not.) <|cite_end|>. Most of these solutions, although set in real-world-inspired scenarios, focus on a direct state-action-reward mapping between the agent's actions and the environment state. That translates to agents that can adapt to dynamic scenarios but that, when applied to competitive scenarios, fail to address the impact of the opponents. In most cases, when these agents choose an action, they do not take into consideration how other agents can affect the state of the scenario. In this regard, competitive reinforcement learning still lags behind the mainstream applications and demonstrations of recent years. In competitive scenarios, the agents have to learn decisions that a) maximize their own goal, and b) minimize their adversaries' goals. Besides dealing with complex scenarios, they usually have to deal with the dynamics between the agents themselves.
Some of the most common applications for competitive reinforcement learning involve multi-agent simulations, such as multiple autonomous vehicles <|cite_start|> (Reference: DeepTraffic: Crowdsourced Hyperparameter Tuning of Deep Reinforcement Learning Systems for Multi-Agent Dense Traffic Navigation: We present a traffic simulation named DeepTraffic where the planning systems for a subset of the vehicles are handled by a neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of deep Q-learning network variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space.) <|cite_end|>, life-simulation/resource gathering <|cite_start|> (Reference: Hierarchical Deep Reinforcement Learning Agent with Counter Self-play on Competitive Games: Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash Equilibrium seeking, selfplay algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm that is beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-Play , a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by Hierarchical Agent with Self-Play can achieve better performance while facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that Hierarchical Agent with Self-Play can perform better than conventional self-play as well as achieve 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.) <|cite_end|>, pursuer/pursued scenarios <|cite_start|> (Reference: Competitive Multi-Agent Deep Reinforcement Learning with Counterfactual Thinking: Counterfactual thinking describes a psychological phenomenon that people re-infer the possible results with different solutions about things that have already happened. It helps people to gain more experience from mistakes and thus to perform better in similar future tasks. This paper investigates the counterfactual thinking for agents to find optimal decision-making strategies in multi-agent reinforcement learning environments. In particular, we propose a multi-agent deep reinforcement learning model with a structure which mimics the human-psychological counterfactual thinking process to improve the competitive abilities for agents. To this end, our model generates several possible actions (intent actions) with a parallel policy structure and estimates the rewards and regrets for these intent actions based on its current understanding of the environment. Our model incorporates a scenario-based framework to link the estimated regrets with its inner policies. During the iterations, our model updates the parallel policies and the corresponding scenario-based regrets for agents simultaneously. To verify the effectiveness of our proposed model, we conduct extensive experiments on two different environments with real-world applications. Experimental results show that counterfactual thinking can actually benefit the agents to obtain more accumulative rewards from the environments with fair information by comparing to their opponents while keeping high performing efficiency.) <|cite_end|>, and multi-player games <|cite_start|> (Reference: Competitive Reinforcement Learning in Atari Games: ) <|cite_end|>. Despite the recent development of, and popular interest in, deep reinforcement learning, only a few competitive learning solutions have been designed, implemented, and evaluated. The implementation of a counterfactual thinking solution <|cite_start|> (Reference: Competitive Multi-Agent Deep Reinforcement Learning with Counterfactual Thinking: Counterfactual thinking describes a psychological phenomenon that people re-infer the possible results with different solutions about things that have already happened. It helps people to gain more experience from mistakes and thus to perform better in similar future tasks. This paper investigates the counterfactual thinking for agents to find optimal decision-making strategies in multi-agent reinforcement learning environments. In particular, we propose a multi-agent deep reinforcement learning model with a structure which mimics the human-psychological counterfactual thinking process to improve the competitive abilities for agents. To this end, our model generates several possible actions (intent actions) with a parallel policy structure and estimates the rewards and regrets for these intent actions based on its current understanding of the environment. Our model incorporates a scenario-based framework to link the estimated regrets with its inner policies. During the iterations, our model updates the parallel policies and the corresponding scenario-based regrets for agents simultaneously. To verify the effectiveness of our proposed model, we conduct extensive experiments on two different environments with real-world applications. Experimental results show that counterfactual thinking can actually benefit the agents to obtain more accumulative rewards from the environments with fair information by comparing to their opponents while keeping high performing efficiency.) <|cite_end|>, based on a classic psychological phenomenon, obtained a good performance on a simple multi-agent resource-gathering life-simulation water world scenario <|cite_start|> (Reference: Cooperative Multi-agent Control Using Deep Reinforcement Learning: ) <|cite_end|>. The model is certainly interesting, but it becomes very complex to scale to realistic scenarios, as it implements an extra counterfactual policy network that is extremely sensitive to hyperparameter changes.
In another direction, a centralized learning mechanism was introduced by Tampuu et al. <|cite_start|> (Reference: Multiagent Cooperation and Competition with Deep Reinforcement Learning: Multiagent systems appear in most social, economical, and political situations. In the present work we extend the Deep Q-Learning Network architecture proposed by Google DeepMind to multiagent environments and investigate how two agents controlled by independent Deep Q-Networks interact in the classic videogame Pong. By manipulating the classical rewarding scheme of Pong we demonstrate how competitive and collaborative behaviors emerge. Competitive agents learn to play and score efficiently. Agents trained under collaborative rewarding schemes find an optimal strategy to keep the ball in the game as long as possible. We also describe the progression from competitive to collaborative behavior. The present work demonstrates that Deep Q-Networks can become a practical tool for studying the decentralized learning of multiagent systems living in highly complex environments.) <|cite_end|>. This presents an effective way of learning competitive actions, but it requires the learner to have total control of the environment, which restricts its applicability. Moreover, all of these models were evaluated using very limited simulations of real-world events, and most of them do not scale well to real-world problems <|cite_start|> (Reference: A Survey and Critique of Multiagent Deep Reinforcement Learning: Deep reinforcement learning (RL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent learning (MAL) scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. The primary goal of this article is to provide a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Additionally, we complement the overview with a broader analysis: (i) we revisit previous key components, originally presented in MAL and RL, and highlight how they have been adapted to multiagent deep reinforcement learning settings. (ii) We provide general guidelines to new practitioners in the area: describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research. (iii) We take a more critical tone raising practical challenges of MDRL (e.g., implementation and computational demands). We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists (e.g., RL and MAL) in a joint effort to promote fruitful research in the multiagent community.) <|cite_end|>. To better assess how popular reinforcement learning methods perform in a real-world competitive scenario, we propose a broad study on how different reinforcement learning agents learn and behave when deployed in such an environment. We investigate how three reinforcement learning models (Deep Q-Learning - DQL <|cite_start|> (Reference: Deep Reinforcement Learning with Double Q-learning: The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.) <|cite_end|>, Advantage Actor-Critic - A2C <|cite_start|> (Reference: Asynchronous Methods for Deep Reinforcement Learning: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.) <|cite_end|>, and Proximal Policy Optimization - PPO <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|>) can learn a competitive multiplayer card game, and we evaluate how their emergent behavior affects their decisions towards winning the game. By focusing on these three implementations, we aim to provide a training, analysis, and performance baseline for the competitive Chef's Hat card game <|cite_start|> (Reference: It's Food Fight! Introducing the Chef's Hat Card Game for Affective-Aware HRI: Emotional expressions and their changes during an interaction affect heavily how we perceive and behave towards other persons. To design an HRI scenario that makes possible to observe, understand, and model affective interactions and generate the appropriate responses or initiations of a robot is a very challenging task. In this paper, we report our efforts in designing such a scenario, and to propose a modeling strategy of affective interaction by artificial intelligence deployed in autonomous robots. Overall, we present a novel HRI game scenario that was designed to comply with the specific requirements that will allow us to develop the next wave of affective-aware social robots that provide adequate emotional responses.) <|cite_end|>, without the need for a centralized learner or overly complex solutions. Our goal is to understand how these established models behave in a real-world-inspired competitive scenario. To keep our scenario as close to the real world as possible, we fully implement the Chef's Hat card game, which has been designed to be used in Human-Robot Interaction (HRI). The game contains specific mechanics that allow complex dynamics between the players to be used in the development of a winning game strategy. We use the OpenAI Gym-based Chef's Hat simulation environment <|cite_start|> (Reference: The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents: To achieve social interactions within Human-Robot Interaction (HRI) environments is a very challenging task. Most of the current research focuses on Wizard-of-Oz approaches, which neglect the recent development of intelligent robots. On the other hand, real-world scenarios usually do not provide the necessary control and reproducibility which are needed for learning algorithms. In this paper, we propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in HRI scenarios, to provide a controllable and reproducible scenario for reinforcement-learning algorithms.) <|cite_end|> to emulate, at a 1:1 scale, all the possible game mechanics. A card game scenario allows us to have a naturally constrained environment and yet obtain responses that are the same as in the real-world counterpart application. It additionally helps us to better understand the decision-making process of the agents, and to better illustrate the strategies learned by each agent and how they affect each other. For each of the three reinforcement learning methods, we introduce adaptations to the learning mechanisms of each agent, including a novel greedy policy for action selection (a generic masked action-selection variant is sketched at the end of this section). We perform three main competitive learning tasks: first, each of these agents is trained against random agents, to evaluate its capability to learn a game strategy. Second, we deploy a self-play routine that allows each agent to further improve its strategies by playing with evolving versions of itself. Third, once all the agents are trained, we choose the best of them and perform an inter-method competition, where the best agents of each learning method play against each other. We compare the performance of these agents by measuring the number of wins they obtain in a series of games, and, to better understand and explain their learned strategies, we evaluate their action-selection behavior over time. We explain our results in terms of how the agents learn gaming strategies, and discuss how their specific learning mechanisms affect their learning behavior. \begin{figure*} \begin{center} \includegraphics[width=0.75\linewidth]{realWorld-Simulation.png} \end{center} \caption{Chef's Hat in real-life gameplay and 1:1 rendered simulation environment.} \label{fig:gameExample} \end{figure*}
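The paper's own greedy action-selection policy is not reproduced here; as a generic illustration of how action selection can be restricted to the legal moves of a card game, a standard masked epsilon-greedy rule could look as follows (a hedged sketch of ours; the Q-values and the legality mask are placeholders, not the actual Chef's Hat interface):

```python
import numpy as np

def masked_epsilon_greedy(q_values: np.ndarray, legal: np.ndarray, eps: float) -> int:
    """With probability eps pick a random legal action; otherwise pick the
    legal action with the highest Q-value (illegal actions are masked out).
    Assumes at least one action is legal."""
    if np.random.rand() < eps:
        return int(np.random.choice(np.flatnonzero(legal)))
    return int(np.argmax(np.where(legal, q_values, -np.inf)))

# Example: 5 discrete actions, only actions 0, 2 and 4 are legal right now.
q = np.array([0.1, 0.9, 0.4, 0.3, 0.2])
legal = np.array([True, False, True, False, True])
a = masked_epsilon_greedy(q, legal, eps=0.1)  # usually 2, the best legal action
```
<|paper_end|>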
[ "<|reference_start|> Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. <|reference_end|>", "<|reference_start|> DeepTraffic: Crowdsourced Hyperparameter Tuning of Deep Reinforcement Learning Systems for Multi-Agent Dense Traffic Navigation: We present a traffic simulation named DeepTraffic where the planning systems for a subset of the vehicles are handled by a neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of deep Q-learning network variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space. <|reference_end|>", "<|reference_start|> Cooperative Multi-agent Control Using Deep Reinforcement Learning: <|reference_end|>", "<|reference_start|> Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time. <|reference_end|>" ]
[ 0, 7, 12, 17 ]
{"<|cite_1|>": "arxiv-54263", "<|multi_cite_2_1|>": "arxiv-121204", "<|multi_cite_2_2|>": "arxiv-123136", "<|multi_cite_3_1|>": "arxiv-250574", "<|multi_cite_3_2|>": "ss-1207827", "<|multi_cite_4_1|>": "ss-894077", "<|multi_cite_4_2|>": "ss-1950952", "<|cite_5|>": "arxiv-144980", "<|cite_6|>": "ss-1950953", "<|cite_7|>": "arxiv-218533", "<|cite_8|>": "ss-1642881", "<|cite_9|>": "arxiv-218533", "<|cite_10|>": "ss-843480", "<|cite_11|>": "arxiv-88143", "<|cite_12|>": "arxiv-176093", "<|cite_13|>": "arxiv-84365", "<|cite_14|>": "arxiv-91622", "<|cite_15|>": "arxiv-129813", "<|cite_16|>": "arxiv-250547", "<|cite_17|>": "arxiv-253422"}
2010.08139
<|paper_start|> Title: A non-intrusive data-driven ROM framework for hemodynamics problems Abstract: A non-intrusive data-driven ROM framework for hemodynamics problems: Reduced order modeling (ROM) techniques are numerical methods that approximate the solution of parametric partial differential equations (PDEs) by properly combining the high-fidelity solutions of the problem obtained for several configurations, i.e. for several properly chosen values of the physical/geometrical parameters characterizing the problem. In this contribution, we propose an efficient non-intrusive data-driven framework involving ROM techniques in computational fluid dynamics (CFD) for hemodynamics applications. Starting from a database of high-fidelity solutions related to certain values of the parameters, we apply the proper orthogonal decomposition with interpolation (PODI) and then reconstruct the variables of interest for new values of the parameters, i.e. values different from the ones included in the database. Furthermore, we present a preliminary web application through which one can run the ROM with a very user-friendly approach, without requiring expertise in numerical analysis and scientific computing. The case study we have chosen to test the efficiency of our algorithm is represented by the aortic blood flow pattern in the presence of a Left Ventricular Assist Device (LVAD) when varying the pump flow rate. Introduction \label{sec:intro} Reduced order modeling (ROM) (see, e.g., <|cite_start|> (Reference: {Model Order Reduction: 報告番号: ; 学位授与年月日: 2012-03-22 ; 学位の種別: 修士 ; 学位の種類: 修士(環境学) ; 学位記番号: 修創域第4421号 ; 研究科・専攻: 新領域創成科学研究科環境学研究系人間環境学専攻) <|cite_end|>) is a widespread technique used both in academia and in industry. It has been introduced as an efficient tool to approximate full order systems by significantly reducing the computational cost required to obtain numerical solutions in a parametric setting. ROM consists of two main stages: an \emph{offline} phase that can be carried out on high performance computing facilities, and an \emph{online} one that hinges on a system of reduced dimensionality to perform the parametric computation on portable devices. In the \emph{offline} phase, the reduced order space is built starting from full order complex simulations computed for certain values of the physical and/or geometrical parameters. In this work, we employ the proper orthogonal decomposition (POD) for the detection of the reduced basis functions that span this new reduced space. After the creation of such a space, in the \emph{online} phase a new parametric solution is obtained as a linear combination of the precomputed reduced basis functions, by means of an interpolation carried out using RBF functions <|cite_start|> (Reference: Radial basis functions for the multivariate interpolation of large scattered data sets: ) <|cite_end|>. The resulting ROM is thus called proper orthogonal decomposition with interpolation (PODI) <|cite_start|> (Reference: Aerodynamic Data Reconstruction and Inverse Design Using Proper Orthogonal Decomposition: The application of proper orthogonal decomposition for incomplete (gappy) data for compressible external aerodynamic problems has been demonstrated successfully in this paper for the first time.
Using this approach, it is possible to construct entire aerodynamic flowfields from the knowledge of computed aerodynamic flow data or measured flow data specified on the aerodynamic surface, thereby demonstrating a means to effectively combine experimental and computational data. The sensitivity of flow reconstruction results to available measurements and to experimental error is analyzed. Another new extension of this approach allows one to cast the problem of inverse airfoil design as a gappy data problem. The gappy methodology demonstrates a great simplification for the inverse airfoil design problem and is found to work well on a range of examples, including both subsonic and transonic cases.) <|cite_end|>; a minimal numerical sketch of this offline/online pipeline is given at the end of this section. The aim of this work is the development of an efficient non-intrusive data-driven reduced order model to be used within the hemodynamics framework. The reader can find examples of ROM applications in the hemodynamics field in <|cite_start|> (Reference: Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a POD-Galerkin method and a vascular shape parametrization: ) <|cite_end|> <|cite_start|> (Reference: Numerical modeling of hemodynamics scenarios of patient-specific coronary artery bypass grafts: ) <|cite_end|> <|cite_start|> (Reference: Combined Parameter and Model Reduction of Cardiovascular Problems by Means of Active Subspaces and POD-Galerkin Methods: ) <|cite_end|> <|cite_start|> (Reference: Reduced order methods for parametric optimal flow control in coronary bypass grafts, toward patient‐specific data assimilation: Coronary artery bypass grafts (CABG) surgery is an invasive procedure performed to circumvent partial or complete blood flow blockage in coronary artery disease. In this work, we apply a numerical optimal flow control model to patient‐specific geometries of CABG, reconstructed from clinical images of real‐life surgical cases, in parameterized settings. The aim of these applications is to match known physiological data with numerical hemodynamics corresponding to different scenarios, arisen by tuning some parameters. Such applications are an initial step toward matching patient‐specific physiological data in patient‐specific vascular geometries as best as possible. Two critical challenges that reportedly arise in such problems are: (a) lack of robust quantification of meaningful boundary conditions required to match known data as best as possible and (b) high computational cost. In this work, we utilize unknown control variables in the optimal flow control problems to take care of the first challenge. Moreover, to address the second challenge, we propose a time‐efficient and reliable computational environment for such parameterized problems by projecting them onto a low‐dimensional solution manifold through proper orthogonal decomposition‐Galerkin.) <|cite_end|> <|cite_start|> (Reference: Non-intrusive PODI-ROM for patient-specific aortic blood flow in presence of a LVAD device: Left ventricular assist devices (LVADs) are used to provide haemodynamic support to patients with critical cardiac failure. Severe complications can occur because of the modifications of the blood flow in the aortic region. In this work, the effect of a continuous flow LVAD device on the aortic flow is investigated by means of a non-intrusive reduced order model (ROM) built using the proper orthogonal decomposition with interpolation (PODI) method.
The full order model (FOM) is represented by the incompressible Navier-Stokes equations discretized by using a Finite Volume (FV) technique, coupled with three-element Windkessel models to enforce outlet boundary conditions in a multi-scale approach. A patient-specific framework is proposed: a personalized geometry reconstructed from Computed Tomography (CT) images is used and the individualization of the coefficients of the three-element Windkessel models is based on experimental data provided by the Right Heart Catheterization (RCH) and Echocardiography (ECHO) tests. Pre-surgery configuration is also considered at FOM level in order to further validate the model. A parametric study with respect to the LVAD flow rate is considered. The accuracy of the reduced order model is assessed against results obtained with the full order model.) <|cite_end|>. We highlight that the online evaluation of the data-driven approach used here is based only on data and does not require knowledge about the governing equations that describe the system. It is also non-intrusive, i.e., no modification of the simulation software is required; for this reason it is particularly versatile, since it can be coupled with commercial solvers as well. It should be noted that many efforts are being made to integrate ROM and technological innovation. From this viewpoint, a crucial step is the web server ARGOS, developed by the mathLab group at SISSA, which will make reduced order models accessible to a wide range of people working in industrial and biomedical contexts. Through specific web applications the user will be able to solve many complex problems without being an expert in numerical analysis and scientific computing. In particular, it is expected that the ATLAS project will collect all cardiovascular applications. In this framework, we present a preliminary web application through which one can run the ROM by using a very user-friendly GUI interface. The benchmark we have chosen to test the efficiency of our algorithm is represented by the aortic blood flow pattern in the presence of a Left Ventricular Assist Device (LVAD) (see, e.g., <|cite_start|> (Reference: Advanced Heart Failure Treated with Continuous-Flow Left Ventricular Assist Device: BACKGROUND Patients with advanced heart failure have improved survival rates and quality of life when treated with implanted pulsatile-flow left ventricular assist devices as compared with medical therapy. New continuous-flow devices are smaller and may be more durable than the pulsatile-flow devices. METHODS In this randomized trial, we enrolled patients with advanced heart failure who were ineligible for transplantation, in a 2:1 ratio, to undergo implantation of a continuous-flow device (134 patients) or the currently approved pulsatile-flow device (66 patients). The primary composite end point was, at 2 years, survival free from disabling stroke and reoperation to repair or replace the device. Secondary end points included survival, frequency of adverse events, the quality of life, and functional capacity. RESULTS Preoperative characteristics were similar in the two treatment groups, with a median age of 64 years (range, 26 to 81), a mean left ventricular ejection fraction of 17%, and nearly 80% of patients receiving intravenous inotropic agents. The primary composite end point was achieved in more patients with continuous-flow devices than with pulsatile-flow devices (62 of 134 [46%] vs.
7 of 66 [11%]; P<0.001; hazard ratio, 0.38; 95% confidence interval, 0.27 to 0.54; P<0.001), and patients with continuous-flow devices had superior actuarial survival rates at 2 years (58% vs. 24%, P=0.008). Adverse events and device replacements were less frequent in patients with the continuous-flow device. The quality of life and functional capacity improved significantly in both groups. CONCLUSIONS Treatment with a continuous-flow left ventricular assist device in patients with advanced heart failure significantly improved the probability of survival free from stroke and device failure at 2 years as compared with a pulsatile device. Both devices significantly improved the quality of life and functional capacity. (ClinicalTrials.gov number, NCT00121485.)) <|cite_end|> <|cite_start|> (Reference: Eighth annual INTERMACS report: Special focus on framing the impact of adverse events.: ) <|cite_end|> <|cite_start|> (Reference: Aortic valve noncoronary cusp thrombosis after implantation of a nonpulsatile, continuous-flow pump.: Different institutions have different strategies for managing both native and prosthetic aortic valves in recipients of left ventricular assist devices (LVADs). Anticoagulation protocols and pump-flow algorithms remain nonstandardized. We describe our institutional experience with thrombotic complications and our evolving approach to this important clinical problem. We report the cases of 4 HeartMate II LVAD recipients in whom, despite an anticoagulative regimen, thrombus formed on the noncoronary cusp of the aortic valve. The management of the closed aortic valve in LVAD-supported patients remains problematic.) <|cite_end|> <|cite_start|> (Reference: Aortic valve thrombosis after implantation of temporary left ventricular assist device: The use of assist devices for ventricular support after myocardial infarction with cardiogenic shock has become common practice. Thrombosis, bleeding, and infection are common complications. However, native valve thrombosis is a rare complication. We present a case of aortic valve thrombosis after implantation of a left ventricular assist device (LVAD) treated with thrombus removal at time of device exchange.) <|cite_end|> <|cite_start|> (Reference: Myocardial infarction after left ventricular assist device implantation: clinical course, role of aortic root thrombus, and outcomes.: ) <|cite_end|> <|cite_start|> (Reference: Aortic Valve Thrombus in a Patient With an Extracorporeal Left Ventricular Assist Device: The Dilemma of Management: PATIENTS with severely compromised left ventricular output undergo left ventricular assist device (LVAD) placement for a variety of reasons. Intracorporeal (totally implantable) devices are placed either as bridge therapy to cardiac transplantation or as destination therapy in patients ineligible to undergo transplantation. Most modern devices are small and rely on impellers to produce axial, nonpulsatile flow in series with the left ventricle (LV). An example of such a device is the HeartMate II (Thoratec Corporation), which is approved by the FDA for both bridge-to-transplantation and destination therapy. In the setting of cardiogenic shock such as acute myocardial infarction (AMI) or significant insult to the LV during cardiopulmonary bypass (CPB), temporary extracorporeal devices may be used for circulatory support. In a patient with a sternotomy, typical cannulation sites are the left atrium and ascending aorta, resulting in flow that is in parallel to native LV flow. 
These devices are meant to be used as a bridge to recovery of myocardial function or as a bridge to a more definitive decision (such as intracorporeal LVAD or termination of care). An example of a device used in this setting is the CentriMag (Thoratec Corporation) VAD, which also can be used for biventricular support if indicated. High flow through both types of devices can overpower and completely bypass the flow through the native left ventricular outflow tract. As a result, stasis of flow in the ascending aorta can occur, with reduced AV cusp excursion. This can contribute to perivalvular thrombus formation. Possible sequelae may include compromised coronary blood flow, heart failure, and embolic events such as AMI or cerebrovascular accident (CVA). The guidelines released in 2013 for mechanical circulatory support recommend warfarin with the optional addition of aspirin to prevent thrombotic complications in LVAD patients. On the flip side, mechanical shear stress from ventricular assist devices leads to an acquired von Willebrand factor deficiency and platelet dysfunction. This, along with pharmacologic anticoagulation, can place these patients at an increased risk for bleeding complications. Karimi et al have suggested that antiplatelet therapy titration with the utilization of thromboelastography may reduce bleeding risk in this population.) <|cite_end|>) when varying the pump flow rate (see, e.g., <|cite_start|> (Reference: Non-intrusive PODI-ROM for patient-specific aortic blood flow in presence of a LVAD device: Left ventricular assist devices (LVADs) are used to provide haemodynamic support to patients with critical cardiac failure. Severe complications can occur because of the modifications of the blood flow in the aortic region. In this work, the effect of a continuous flow LVAD device on the aortic flow is investigated by means of a non-intrusive reduced order model (ROM) built using the proper orthogonal decomposition with interpolation (PODI) method. The full order model (FOM) is represented by the incompressible Navier-Stokes equations discretized by using a Finite Volume (FV) technique, coupled with three-element Windkessel models to enforce outlet boundary conditions in a multi-scale approach. A patient-specific framework is proposed: a personalized geometry reconstructed from Computed Tomography (CT) images is used and the individualization of the coefficients of the three-element Windkessel models is based on experimental data provided by the Right Heart Catheterization (RCH) and Echocardiography (ECHO) tests. Pre-surgery configuration is also considered at FOM level in order to further validate the model. A parametric study with respect to the LVAD flow rate is considered. The accuracy of the reduced order model is assessed against results obtained with the full order model.) <|cite_end|> <|cite_start|> (Reference: Patient-specific isogeometric fluid–structure interaction analysis of thoracic aortic blood flow due to implantation of the Jarvik 2000 left ventricular assist device: ) <|cite_end|> <|cite_start|> (Reference: Numerical prediction of the effect of aortic Left Ventricular Assist Device outflow-graft anastomosis location: ) <|cite_end|>). The work is organized as follows. In Sec. \ref{sec:fom} we present the general parametric full order model governing hemodynamic problems, over which we apply the proposed numerical methodology. In Sec. \ref{sec:rom} we present the PODI method, whilst in Sec.
\ref{sec:results} we show the numerical setting of the problem and the results achieved, and provide a brief description of the web application we developed. Finally, in Sec. \ref{sec:conclusion} conclusions and perspectives are drawn. <|paper_end|>
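To make the offline/online structure of the non-intrusive PODI approach concrete, the following minimal sketch is included; it is an illustration under stated assumptions rather than the implementation behind the results above: all function names are hypothetical, the parameter is taken to be the scalar LVAD flow rate, and the snapshots are assumed to be precomputed FOM solution vectors.

\begin{verbatim}
import numpy as np
from scipy.interpolate import RBFInterpolator

def build_podi_rom(snapshots, mu_train, rank):
    # Offline stage. snapshots: (n_dofs, n_train) matrix of FOM solutions
    # computed at the training parameters mu_train (LVAD flow rates).
    modes, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = modes[:, :rank]               # POD basis, shape (n_dofs, rank)
    coeffs = basis.T @ snapshots          # modal coefficients, (rank, n_train)
    pts = np.asarray(mu_train, dtype=float).reshape(-1, 1)
    # One interpolant per POD mode over the parameter space.
    interps = [RBFInterpolator(pts, coeffs[i]) for i in range(rank)]
    return basis, interps

def evaluate_podi_rom(basis, interps, mu_new):
    # Online stage: purely data-driven, so neither the governing equations
    # nor the FOM solver is needed to predict the field at a new flow rate.
    a = np.array([itp([[float(mu_new)]])[0] for itp in interps])
    return basis @ a                      # approximate full-order field
\end{verbatim}

Because the online stage only manipulates snapshot data, a workflow of this kind can be exposed directly through a web front end such as the ARGOS application discussed above.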
[ "<|reference_start|> {Model Order Reduction: 報告番号: ; 学位授与年月日: 2012-03-22 ; 学位の種別: 修士 ; 学位の種類: 修士(環境学) ; 学位記番号: 修創域第4421号 ; 研究科・専攻: 新領域創成科学研究科環境学研究系人間環境学専攻 <|reference_end|>", "<|reference_start|> Non-intrusive PODI-ROM for patient-specific aortic blood flow in presence of a LVAD device: Left ventricular assist devices (LVADs) are used to provide haemodynamic support to patients with critical cardiac failure. Severe complications can occur because of the modifications of the blood flow in the aortic region. In this work, the effect of a continuous flow LVAD device on the aortic flow is investigated by means of a non-intrusive reduced order model (ROM) built using the proper orthogonal decomposition with interpolation (PODI) method. The full order model (FOM) is represented by the incompressible Navier-Stokes equations discretized by using a Finite Volume (FV) technique, coupled with three-element Windkessel models to enforce outlet boundary conditions in a multi-scale approach. A patient-specific framework is proposed: a personalized geometry reconstructed from Computed Tomography (CT) images is used and the individualization of the coefficients of the three-element Windkessel models is based on experimental data provided by the Right Heart Catheterization (RCH) and Echocardiography (ECHO) tests. Pre-surgery configuration is also considered at FOM level in order to further validate the model. A parametric study with respect to the LVAD flow rate is considered. The accuracy of the reduced order model is assessed against results obtained with the full order model. <|reference_end|>", "<|reference_start|> Myocardial infarction after left ventricular assist device implantation: clinical course, role of aortic root thrombus, and outcomes.: <|reference_end|>", "<|reference_start|> Aortic Valve Thrombus in a Patient With an Extracorporeal Left Ventricular Assist Device: The Dilemma of Management: PATIENTS with severely compromised left ventricular output undergo left ventricular assist device (LVAD) placement for a variety of reasons. Intracorporeal (totally implantable) devices are placed either as bridge therapy to cardiac transplantation or as destination therapy in patients ineligible to undergo transplantation. Most modern devices are small and rely on impellers to produce axial, nonpulsatile flow in series with the left ventricle (LV). An example of such a device is the HeartMate II (Thoratec Corporation), which is approved by the FDA for both bridge-to-transplantation and destination therapy. In the setting of cardiogenic shock such as acute myocardial infarction (AMI) or significant insult to the LV during cardiopulmonary bypass (CPB), temporary extracorporeal devices may be used for circulatory support. In a patient with a sternotomy, typical cannulation sites are the left atrium and ascending aorta, resulting in flow that is in parallel to native LV flow. These devices are meant to be used as a bridge to recovery of myocardial function or as a bridge to a more definitive decision (such as intracorporeal LVAD or termination of care). An example of a device used in this setting is the Centrimags (Thoratecs Corporation) VAD, which also can be used for biventricular support if indicated. High flow through both types of devices can overpower and completely bypass the flow through the native left ventricular outflow tract. As a result, stasis of flow in the ascending aorta can occur, with reduced AV cusp excursion. This can contribute to perivalvular thrombus formation. 
Possible sequelae may include compromised coronary blood flow, heart failure, and embolic events such as AMI or cerebrovascular accident (CVA). The guidelines released in 2013 for mechanical circulatory support recommend warfarin with the optional addition of aspirin to prevent thrombotic complications in LVAD patients. On the flip side, mechanical shear stress from ventricular assist devices leads to an acquired von Willebrand factor deficiency and platelet dysfunction. This, along with pharmacologic anticoagulation, can place these patients at an increased risk for bleeding complications. Karimi et al have suggested that antiplatelet therapy titration with the utilization of thromboelastography may reduce bleeding risk in this population. <|reference_end|>" ]
[ 0, 7, 12, 13 ]
{"<|cite_1|>": "ss-1319884", "<|cite_2|>": "ss-1966724", "<|cite_3|>": "ss-1679608", "<|multi_cite_4_1|>": "ss-1657341", "<|multi_cite_4_2|>": "ss-1259585", "<|multi_cite_4_3|>": "ss-1260381", "<|multi_cite_4_4|>": "ss-1979137", "<|multi_cite_4_5|>": "arxiv-276911", "<|multi_cite_7_2|>": "ss-1679589", "<|multi_cite_7_3|>": "ss-1679590", "<|multi_cite_7_4|>": "ss-1979138", "<|multi_cite_7_5|>": "ss-1979139", "<|multi_cite_7_6|>": "ss-1979140", "<|multi_cite_7_7|>": "ss-1679592", "<|multi_cite_8_1|>": "arxiv-276911", "<|multi_cite_8_2|>": "ss-1044196", "<|multi_cite_8_3|>": "ss-1679594"}
2205.13125-0
<|paper_start|> Title: Prompt-based Learning for Unpaired Image Captioning Abstract: Prompt-based Learning for Unpaired Image Captioning: Unpaired Image Captioning (UIC) has been developed to learn image descriptions from unaligned vision-language sample pairs. Existing works usually tackle this task using adversarial learning and a visual concept reward based on reinforcement learning. However, these methods learn only limited cross-domain information across the vision and language domains, which restricts the captioning performance of UIC. Inspired by the success of Vision-Language Pre-Trained Models (VL-PTMs), we attempt to infer cross-domain cues about a given image from large VL-PTMs for the UIC task. This research is also motivated by recent successes of prompt learning in many downstream multi-modal tasks, including image-text retrieval and visual question answering. In this work, a semantic prompt is introduced and aggregated with visual features for more accurate caption prediction under the adversarial learning framework. In addition, a metric prompt is designed to select high-quality pseudo image-caption samples obtained from the basic captioning model and refine the model in an iterative manner. Extensive experiments on the COCO and Flickr30K datasets validate the promising captioning ability of the proposed model. We expect that the proposed prompt-based UIC model will stimulate a new line of research for VL-PTM-based captioning. Introduction The goal of image captioning is to automatically describe visual images with natural language. This is a cross-modality task that transfers information from the image domain to the language domain <|cite_start|> (Reference: Fine-Grained Image Captioning with Global-Local Discriminative Objective: Significant progress has been made in recent years in image captioning, an active topic in the fields of vision and language. However, existing methods tend to yield overly general captions and consist of some of the most frequent words/phrases, resulting in inaccurate and indistinguishable descriptions (see Figure 1). This is primarily due to (i) the conservative characteristic of traditional training objectives that drives the model to generate correct but hardly discriminative captions for similar images and (ii) the uneven word distribution of the ground-truth captions, which encourages generating highly frequent words/phrases while suppressing the less frequent but more concrete ones. In this work, we propose a novel global-local discriminative objective that is formulated on top of a reference model to facilitate generating fine-grained descriptive captions. Specifically, from a global perspective, we design a novel global discriminative constraint that pulls the generated sentence to better discern the corresponding image from all others in the entire dataset. From the local perspective, a local discriminative constraint is proposed to increase attention such that it emphasizes the less frequent but more concrete words/phrases, thus facilitating the generation of captions that better describe the visual details of the given images. We evaluate the proposed method on the widely used MS-COCO dataset, where it outperforms the baseline methods by a sizable margin and achieves competitive performance over existing leading approaches. We also conduct self-retrieval experiments to demonstrate the discriminability of the proposed method.)
<|cite_end|> <|cite_start|> (Reference: Integrating Part of Speech Guidance for Image Captioning: To generate an image caption, firstly, the content of the image should be fully understood; and then the semantic information contained in the image should be described using a phrase or statement that conforms to certain grammatical rules. Thus, it requires techniques from both computer vision and natural language processing to connect the two different media forms together, which is highly challenging. To adaptively adjust the effect of visual information and language information on the captioning process, in this paper, the part of speech information is proposed to novelly integrate with image captioning models based on the encoder-decoder framework. First, a part of speech prediction network is proposed to analyze and model the part of speech sequences for the words in natural language sentences; then, different mechanisms are proposed to integrate the part of speech guidance information with merge-based and inject-based image captioning models, respectively; finally, according to the integrated frameworks, a multi-task learning paradigm is proposed to facilitate model training. Experiments are conducted on two widely used image captioning datasets, Flickr30k and COCO, and the results have validated that the image captions generated by the proposed method contain more accurate visual information and comply with language habits and grammar rules better.) <|cite_end|> <|cite_start|> (Reference: Captionnet: A tailor-made recurrent neural network for generating image descriptions: Image captioning is a challenging task of visual understanding and has drawn more attention of researchers. In general, two inputs are required at each time step by the Long Short-Term Memory (LSTM) network used in popular attention based image captioning frameworks, including image features and previous generated words. However, error will be accumulated if the previous words are not accurate and the related semantic is not efficient enough. Facing these challenges, a novel model named CaptionNet is proposed in this work as an improved LSTM specially designed for image captioning. Concretely, only attended image features are allowed to be fed into the memory of CaptionNet through input gates. In this way, the dependency on the previous predicted words can be reduced, forcing model to focus on more visual clues of images at the current time step. Moreover, a memory initialization method called image feature encoding is designed to capture richer semantics of the target image. The evaluation on the benchmark MSCOCO and Flickr30K datasets demonstrates the effectiveness of the proposed CaptionNet model, and extensive ablation studies are performed to verify each of the proposed methods. The project page can be found in https://mic.tongji.edu.cn/3f/9c/c9778a147356/page.htm.) <|cite_end|>. With the release of large-scale captioning datasets <|cite_start|> (Reference: Show and Tell: A Neural Image Caption Generator: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image.
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research, and provide an open source implementation in TensorFlow.) <|cite_end|> and the advances in deep learning, the performance of image captioning has been continuously improved. It has been widely used in many applications, such as human-robot interaction <|cite_start|> (Reference: Visual Dialog: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies.
Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org) <|cite_end|> <|cite_start|> (Reference: VLN-BERT: A Recurrent Vision-and-Language BERT for Navigation: Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty adapting the BERT architecture to the partially observable Markov decision process present in VLN, requiring history-dependent attention and decision making. In this paper we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.) <|cite_end|>, visual aid for the blind <|cite_start|> (Reference: Automatic alt-text: Computer-generated image descriptions for blind users on a social network service: We designed and deployed automatic alt-text (AAT), a system that applies computer vision technology to identify faces, objects, and themes from photos to generate photo alt-text for screen reader users on Facebook. We designed our system through iterations of prototyping and in-lab user studies. Our lab test participants had a positive reaction to our system and an enhanced experience with Facebook photos. We also evaluated our system through a two-week field study as part of the Facebook iOS app for 9K VoiceOver users. We randomly assigned them into control and test groups and collected two weeks of activity data and their survey feedback. The test group reported that photos on Facebook were easier to interpret and more engaging, and found Facebook more useful in general. Our system demonstrates that artificial intelligence can be used to enhance the experience for visually impaired users on social networking sites (SNSs), while also revealing the challenges with designing automated assistive technology in a SNS context.) <|cite_end|> <|cite_start|> (Reference: VizWiz Grand Challenge: Answering Visual Questions from Blind People: The study of algorithms to automatically answer visual questions currently is motivated by visual question answering (VQA) datasets constructed in artificial VQA settings. We propose VizWiz, the first goal-oriented VQA dataset arising from a natural VQA setting. VizWiz consists of over 31,000 visual questions originating from blind people who each took a picture using a mobile phone and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. VizWiz differs from the many existing VQA datasets because (1) images are captured by blind photographers and so are often poor quality, (2) questions are spoken and so are more conversational, and (3) often visual questions cannot be answered. Evaluation of modern algorithms for answering visual questions and deciding if a visual question is answerable reveals that VizWiz is a challenging dataset. 
We introduce this dataset to encourage a larger community to develop more generalized algorithms that can assist blind people.) <|cite_end|> <|cite_start|> (Reference: Captioning Images Taken by People Who Are Blind: While an important problem in the vision community is to design algorithms that can automatically caption images, few publicly-available datasets for algorithm development directly address the interests of real users. Observing that people who are blind have relied on (human-based) image captioning services to learn about images they take for nearly a decade, we introduce the first image captioning dataset to represent this real use case. This new dataset, which we call VizWiz-Captions, consists of over 39,000 images originating from people who are blind that are each paired with five captions. We analyze this dataset to (1) characterize the typical captions, (2) characterize the diversity of content found in the images, and (3) compare its content to that found in eight popular vision datasets. We also analyze modern image captioning algorithms to identify what makes this new dataset challenging for the vision community. We publicly-share the dataset with captioning challenge instructions at https://vizwiz.org) <|cite_end|>, and autonomous driving <|cite_start|> (Reference: Textual Explanations for Self-Driving Vehicles: Deep neural perception and control networks have become key components of self-driving vehicles. User acceptance is likely to benefit from easy-to-interpret textual explanations which allow end-users to understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. We propose a new approach to introspective explanations which consists of two parts. First, we use a visual (spatial) attention model to train a convolutional network end-to-end from images to the vehicle control commands, i.e., acceleration and change of course. The controller's attention identifies image regions that potentially influence the network's output. Second, we use an attention-based video-to-text model to produce textual explanations of model actions. The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. Finally, we explore a version of our model that generates rationalizations, and compare with introspective explanations on the same video segments. We evaluate these models on a novel driving dataset with ground-truth human explanations, the Berkeley DeepDrive eXplanation (BDD-X) dataset. Code is available at https://github.com/JinkyuKimUCB/explainable-deep-driving.) <|cite_end|> <|cite_start|> (Reference: Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation: Existing research studies on vision and language grounding for robot navigation focus on improving model-free deep reinforcement learning (DRL) models in synthetic environments. However, model-free DRL models do not consider the dynamics in the real-world environments, and they often fail to generalize to new scenes.
In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices---We propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task. Our look-ahead module tightly integrates a look-ahead policy model with an environment model that predicts the next state and the reward. Experimental results suggest that our proposed method significantly outperforms the baselines and achieves the best on the real-world Room-to-Room dataset. Moreover, our scalable method is more generalizable when transferring to unseen environments.) <|cite_end|> <|cite_start|> (Reference: Explanations in Autonomous Driving: A Survey: The automotive industry has witnessed an increasing level of development in the past decades; from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in Artificial Intelligence (AI), automotive companies now employ blackbox AI models to enable vehicles to perceive their environments and make driving decisions with little or no input from a human. With the hope to deploy autonomous vehicles (AV) on a commercial scale, the acceptance of AV by society becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of the compliance of AVs to these acceptance requirements can be facilitated through the provision of explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs. AVs should be able to explain what they have 'seen', done, and might do in environments in which they operate. In this paper, we provide a comprehensive survey of the existing body of work around explainable autonomous driving. First, we open with a motivation for explanations by highlighting and emphasising the importance of transparency, accountability, and trust in AVs; and examining existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and elicit their explanation requirements for AV. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, control, and system management). Finally, we identify pertinent challenges and provide recommendations, such as a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required of researchers who are interested in explainability in AVs.) <|cite_end|>. Mainstream image captioning models follow the encoder-decoder paradigm <|cite_start|> (Reference: Unpaired Image Captioning with Semantic-Constrained Self-Learning: Image captioning has been an emerging and fast-developing research topic. Nevertheless, most existing works heavily rely on large amounts of image-sentence pairs and therefore hinder the practical applications of captioning in the wild. In this paper, we present a novel Semantic-Constrained Self-learning (SCS) framework that explores an iterative self-learning strategy to learn an image captioner with only unpaired image and text data.
Technically, SCS consists of two stages, i.e., pseudo pair generation and captioner re-training, iteratively producing "pseudo" image-sentence pairs via a pre-trained captioner and re-training the captioner with the pseudo pairs, respectively. Particularly, both stages are guided by the recognized objects in the image, that act as semantic constraint to strengthen the semantic alignment between the input image and the output sentence. We leverage a semantic-constrained beam search for pseudo pair generation to regularize the decoding process with the recognized objects via forcing the inclusion/exclusion of the recognized/irrelevant objects in output sentence. For captioner re-training, a self-supervised triplet loss is utilized to preserve the relative semantic similarity ordering among generated sentences with regard to the input image triplets. Moreover, an object inclusion reward and an adversarial reward are adopted to encourage the inclusion of the predicted objects in the output sentence and pursue the generation of more realistic sentences during self-critical training, respectively. Experiments conducted on both dependent and independent unpaired data validate the superiority of SCS. More remarkably, we obtain the best published CIDEr score to-date of 74.7\% on COCO Karpathy test split for unpaired image captioning.) <|cite_end|> <|cite_start|> (Reference: RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning: Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now surprisingly little attention has been focused on continual learning of recurrent models applied to problems like image captioning. In this paper we take a systematic look at continual learning of LSTM-based models for image captioning. We propose an attention-based approach that explicitly accommodates the transient nature of vocabularies in continual image captioning tasks -- i.e. that task vocabularies are not disjoint. We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems. We apply our approaches to incremental image captioning problem on two new continual learning benchmarks we define using the MS-COCO and Flickr30 datasets. Our results demonstrate that RATT is able to sequentially learn five captioning tasks while incurring no forgetting of previously learned ones.) <|cite_end|>, which encodes the image into a feature representation and then decodes it into a sentence in a word-by-word fashion. Although the performance is good, such supervised-learning-based captioning models rely on massive amounts of labeled vision-language pairs <|cite_start|> (Reference: Multitask learning for cross-domain image captioning: Recent artificial intelligence research has witnessed great interest in automatically generating text descriptions of images, which are known as the \emph{image captioning} task. Remarkable success has been achieved on domains where a large number of paired data in multimedia are available. Nevertheless, annotating sufficient data is labor-intensive and time-consuming, establishing significant barriers for adapting the image captioning systems to new domains. In this study, we introduce a novel Multitask Learning Algorithm for cross-Domain Image Captioning (MLADIC).
MLADIC is a multitask system that simultaneously optimizes two coupled objectives via a dual learning mechanism: image captioning and text-to-image synthesis, with the hope that by leveraging the correlation of the two dual tasks, we are able to enhance the image captioning performance in the target domain. Concretely, the image captioning task is trained with an encoder–decoder model (i.e., CNN-LSTM) to generate textual descriptions of the input images. The image synthesis task employs the conditional generative adversarial network (C-GAN) to synthesize plausible images based on text descriptions. In C-GAN, a generative model $G$ synthesizes plausible images given text descriptions, and a discriminative model $D$ tries to distinguish the images in training data from the generated images by $G$. The adversarial process can eventually guide $G$ to generate plausible and high-quality images. To bridge the gap between different domains, a two-step strategy is adopted in order to transfer knowledge from the source domains to the target domains. First, we pre-train the model to learn the alignment between the neural representations of images and that of text data with the sufficient labeled source domain data. Second, we fine-tune the learned model by leveraging the limited image–text pairs and unpaired data in the target domain. We conduct extensive experiments to evaluate the performance of MLADIC by using the MSCOCO as the source domain data, and using Flickr30k and Oxford-102 as the target domain data. The results demonstrate that MLADIC achieves substantially better performance than the strong competitors for the cross-domain image captioning task.) <|cite_end|> <|cite_start|> (Reference: High-Quality Image Captioning With Fine-Grained and Semantic-Guided Visual Attention: The soft-attention mechanism is regarded as one of the representative methods for image captioning. Based on the end-to-end convolutional neural network (CNN)-long short term memory (LSTM) framework, the soft-attention mechanism attempts to link the semantic representation in text (i.e., captioning) with relevant visual information in the image for the first time. Motivated by this approach, several state-of-the-art attention methods are proposed. However, due to the constraints of CNN architecture, the given image is only segmented to the fixed-resolution grid at a coarse level. The visual feature extracted from each grid indiscriminately fuses all inside objects and/or their portions. There is no semantic link between grid cells. In addition, the large area "stuff" (e.g., the sky or a beach) cannot be represented using the current methods. To address these problems, this paper proposes a new model based on the fully convolutional network (FCN)-LSTM framework, which can generate an attention map at a fine-grained grid-wise resolution. Moreover, the visual feature of each grid cell is contributed only by the principal object. By adopting the grid-wise labels (i.e., semantic segmentation), the visual representations of different grid cells are correlated to each other. With the ability to attend to large area "stuff," our method can further summarize an additional semantic context from semantic labels.
This method can provide comprehensive context information to the language LSTM decoder. In this way, a mechanism of fine-grained and semantic-guided visual attention is created, which can accurately link the relevant visual information with each semantic meaning inside the text. Demonstrated by three experiments including both qualitative and quantitative analyses, our model can generate captions of high quality, specifically high levels of accuracy, completeness, and diversity. Moreover, our model significantly outperforms all other methods that use VGG-based CNN encoders without fine-tuning.) <|cite_end|> <|cite_start|> (Reference: Know more say less: Image captioning based on scene graphs: Automatically describing the content of an image has been attracting considerable research attention in the multimedia field. To represent the content of an image, many approaches directly utilize convolutional neural networks (CNNs) to extract visual representations, which are fed into recurrent neural networks to generate natural language. Recently, some approaches have detected semantic concepts from images and then encoded them into high-level representations. Although substantial progress has been achieved, most of the previous methods treat entities in images individually, thus lacking structured information that provides important cues for image captioning. In this paper, we propose a framework based on scene graphs for image captioning. Scene graphs contain abundant structured information because they not only depict object entities in images but also present pairwise relationships. To leverage both visual features and semantic knowledge in structured scene graphs, we extract CNN features from the bounding box offsets of object entities for visual representations, and extract semantic relationship features from triples (e.g., man riding bike) for semantic representations. After obtaining these features, we introduce a hierarchical-attention-based module to learn discriminative features for word generation at each time step. The experimental results on benchmark datasets demonstrate the superiority of our method compared with several state-of-the-art methods.) <|cite_end|>, the collection of which is time- and energy-consuming. Also, models trained on limited samples may have poor generalization ability. \begin{figure*}[!bht] \centering \small \includegraphics[width=0.9\textwidth, height=3cm]{samples/images/First_image_wiser_with_architecture_with_iteration_2.pdf} \caption{The prompt-based learning scheme for unpaired image captioning. (a) An ignorant child learns from a wise man how to describe an image. (b) PL-UIC is developed to utilize prompts, learned from a vision-language pre-trained model (VL-PTM), to generate captions for images. The prompts for each image carry abundant contextual information about the matched images and texts, information that previous UIC models lack but that is indispensable. The red dotted lines represent the process of caption scoring and re-teaching in (a), which corresponds to the process of caption filtering and UIC model refinement in (b).} \label{fig:first_image} \end{figure*} Considering the limitations of the fully-supervised image captioning paradigm, captioning with unpaired vision-language samples has drawn increasing attention, as this approach does not require carefully labeled image-text training pairs.
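Before turning to unpaired methods, the supervised encoder-decoder paradigm discussed above can be made concrete with a minimal sketch; the class below is purely illustrative (names and dimensions are hypothetical, not the architecture of any cited work) and shows how an image feature vector conditions an LSTM that emits a caption word by word.

\begin{verbatim}
import torch
import torch.nn as nn

class EncoderDecoderCaptioner(nn.Module):
    # Toy captioner: a precomputed image feature initializes the LSTM
    # state, and words are then generated greedily one step at a time.
    def __init__(self, feat_dim=2048, embed_dim=512,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, vocab_size)

    @torch.no_grad()
    def caption(self, img_feat, bos_id=1, max_len=20):
        h = torch.tanh(self.init_h(img_feat))       # (batch, hidden_dim)
        c = torch.zeros_like(h)
        word = torch.full((img_feat.size(0),), bos_id,
                          dtype=torch.long, device=img_feat.device)
        out = []
        for _ in range(max_len):                    # word-by-word decoding
            h, c = self.cell(self.embed(word), (h, c))
            word = self.head(h).argmax(dim=-1)      # greedy next word
            out.append(word)
        return torch.stack(out, dim=1)              # (batch, max_len) ids
\end{verbatim}

Training such a model with cross-entropy requires aligned image-caption pairs, which is exactly the supervision that the unpaired setting dispenses with.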
Usually, these models are developed based on \emph{adversarial learning} <|cite_start|> (Reference: Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner: Impressive image captioning results are achieved in domains with plenty of training image and sentence pairs (e.g., MSCOCO). However, transferring to a target domain with significant domain shifts but no paired training data (referred to as cross-domain image captioning) remains largely unexplored. We propose a novel adversarial training procedure to leverage unpaired data in the target domain. Two critic networks are introduced to guide the captioner, namely domain critic and multi-modal critic. The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain. The multi-modal critic assesses whether an image and its generated sentence are a valid pair. During training, the critics and captioner act as adversaries -- captioner aims to generate indistinguishable sentences, whereas critics aim at distinguishing them. The assessment improves the captioner through policy gradient updates. During inference, we further propose a novel critic-based planning method to select high-quality sentences without additional supervision (e.g., tags). To evaluate, we use MSCOCO as the source domain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k) as the target domains. Our method consistently performs well on all datasets. In particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after adaptation. Utilizing critics during inference further gives another 4.5% boost.) <|cite_end|> <|cite_start|> (Reference: Adversarial Feature Learning: The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.) <|cite_end|> and the \emph{visual concept reward} based on reinforcement learning <|cite_start|> (Reference: Unsupervised Image Captioning: Deep neural networks have achieved great successes on the image captioning task. However, most of the existing models depend heavily on paired image-sentence datasets, which are very expensive to acquire. In this paper, we make the first attempt to train an image captioning model in an unsupervised manner. Instead of relying on manually labeled image-sentence pairs, our proposed model merely requires an image set, a sentence corpus, and an existing visual concept detector. The sentence corpus is used to teach the captioning model how to generate plausible sentences.
Meanwhile, the knowledge in the visual concept detector is distilled into the captioning model to guide the model to recognize the visual concepts in an image. In order to further encourage the generated captions to be semantically consistent with the image, the image and caption are projected into a common latent space so that they can reconstruct each other. Given that the existing sentence corpora are mainly designed for linguistic research and are thus with little reference to image contents, we crawl a large-scale image description corpus of two million natural sentences to facilitate the unsupervised image captioning scenario. Experimental results show that our proposed model is able to produce quite promising results without any caption annotations.) <|cite_end|> <|cite_start|> (Reference: Towards Unsupervised Image Captioning with Shared Multimodal Embeddings: Understanding images without explicit supervision has become an important problem in computer vision. In this paper, we address image captioning by generating language descriptions of scenes without learning from annotated pairs of images and their captions. The core component of our approach is a shared latent space that is structured by visual concepts. In this space, the two modalities should be indistinguishable. A language model is first trained to encode sentences into semantically structured embeddings. Image features that are translated into this embedding space can be decoded into descriptions through the same language model, similarly to sentence embeddings. This translation is learned from weakly paired images and text using a loss robust to noisy assignments and a conditional adversarial component. Our approach allows to exploit large text corpora outside the annotated distributions of image/caption data. Our experiments show that the proposed domain alignment learns a semantically meaningful representation which outperforms previous work.) <|cite_end|>. As an early attempt, adversarial learning only guides the optimization of UIC parameters at the level of overall sentence structure, while the correlations between the vision and language domains remain insufficiently explored. Concept-reward-based UIC models simply constrain their captions to contain the detected visual concepts (such as ``dog'' and ``tree''); as a result, their performance depends heavily on object detectors, and only very limited cross-domain knowledge is involved. How to exploit richer vision-language knowledge for UIC without paired image-text samples remains a challenging open research problem. Recently, giant pre-trained models <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> have demonstrated their abundant prior knowledge through superior performance in multiple domains and tasks, including natural language processing, computer vision, and multi-modal learning. These models carry an extremely large number of parameters and are pre-trained on extremely large-scale corpora. For example, CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> is pre-trained with 400 million image-text pairs using cosine similarity maximization. Its superior zero-/few-shot performance demonstrates that it carries rich vision-language prior knowledge. CLIP features have also been shown to significantly improve performance on many other computer vision tasks <|cite_start|> (Reference: CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields: We present CLIP-NeRF, a multi-modal 3D object manipulation method for neural radiance fields (NeRF). By leveraging the joint language-image embedding space of the recent Contrastive Language-Image Pre-Training (CLIP) model, we propose a unified framework that allows manipulating NeRF in a user-friendly way, using either a short text prompt or an exemplar image.
Specifically, to combine the novel view synthesis capability of NeRF and the controllable manipulation ability of latent representations from generative models, we introduce a disentangled conditional NeRF architecture that allows individual control over both shape and appearance. This is achieved by performing the shape conditioning via applying a learned deformation field to the positional encoding and deferring color conditioning to the volumetric rendering stage. To bridge this disentangled latent representation to the CLIP embedding, we design two code mappers that take a CLIP embedding as input and update the latent codes to reflect the targeted editing. The mappers are trained with a CLIP-based matching loss to ensure the manipulation accuracy. Furthermore, we propose an inverse optimization method that accurately projects an input image to the latent codes for manipulation to enable editing on real images. We evaluate our approach by extensive experiments on a variety of text prompts and exemplar images and also provide an intuitive interface for interactive editing. Our implementation is available at https://cassiepython.github.io/clipnerf/) <|cite_end|> <|cite_start|> (Reference: PointCLIP: Point Cloud Understanding by CLIP: Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training (CLIP) have shown inspirational performance on 2D visual recognition, which learns to match images with their corresponding texts in open-vocabulary settings. However, it remains under explored that whether CLIP, pre-trained by large-scale image-text pairs in 2D, can be generalized to 3D recognition. In this paper, we identify such a setting is feasible by proposing PointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D category texts. Specifically, we encode a point cloud by projecting it into multi-view depth maps without rendering, and aggregate the view-wise zero-shot prediction to achieve knowledge transfer from 2D to 3D. On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in 2D. By just fine-tuning the lightweight adapter in the few-shot settings, the performance of PointCLIP could be largely improved. In addition, we observe the complementary property between PointCLIP and classical 3D-supervised networks. By simple ensembling, PointCLIP boosts baseline's performance and even surpasses state-of-the-art models. Therefore, PointCLIP is a promising alternative for effective 3D point cloud understanding via CLIP under low resource cost and data regime. We conduct thorough experiments on widely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to demonstrate the effectiveness of PointCLIP. The code is released at https://github.com/ZrrSkywalker/PointCLIP.) <|cite_end|> <|cite_start|> (Reference: CLIP-It! Language-Guided Video Summarization: A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. 
This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.) <|cite_end|>. On the other hand, prompt learning <|cite_start|> (Reference: Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing: This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g. the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including constantly-updated survey, and paperlist.) <|cite_end|> has been proposed to better leverage pre-trained models to improve overall performance on downstream tasks, such as PPT <|cite_start|> (Reference: PPT: Pre-trained Prompt Tuning for Few-shot Learning: Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks. Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to be fully explored. In our pilot experiments, we find that prompt tuning performs comparably with conventional full-model fine-tuning when downstream data are sufficient, whereas it performs much worse under few-shot learning settings, which may hinder the application of prompt tuning in practice.
We attribute this low performance to the manner of initializing soft prompts. Therefore, in this work, we propose to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization. We name this Pre-trained Prompt Tuning framework "PPT". To ensure the generalization of PPT, we formulate similar classification tasks into a unified task form and pre-train soft prompts for this unified task. Extensive experiments show that tuning pre-trained prompts for downstream tasks can reach or even outperform full-model fine-tuning under both full-data and few-shot settings. Our approach is effective and efficient for using large-scale PLMs in practice.) <|cite_end|>, CoOp <|cite_start|> (Reference: Learning to Prompt for Vision-Language Models: Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming -- one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.) <|cite_end|>, and VPT <|cite_start|> (Reference: Visual Prompt Tuning: The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, ie, full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small amount (less than 1% of model parameters) of trainable parameters in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.) 
<|cite_end|>. These works inspire us to \emph{design new mechanisms for UIC by extracting prior vision-language knowledge from large pre-trained models.} In this paper, a novel Prompt-based Learning scheme is proposed for UIC, termed PL-UIC, which can extract prior knowledge from large-scale VL-PTMs. The key idea is similar to coaching a child to describe an image with the help of a wise man, as illustrated in Fig.~\ref{fig:first_image}: the child may describe the content of a given image more accurately if the wise man gives some helpful prompts. Therefore, two kinds of prompts are designed, \textit{i.e.}, the \emph{semantic prompt} and the \emph{metric prompt}, to imitate such a learning paradigm. More specifically, the visual images are taken as input to the semantic prompt extraction module, which consists of a pre-trained VL-PTM (CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>is used in our experiments) and a feed-forward layer. The predicted prompt vector is fed into the CLIP model to adjust its context and thereby align the image and the prompt accurately. Then, the semantic prompt is injected into the adversarial learning based UIC framework for more intelligent and robust caption generation. The metric prompt is designed to transform the aforementioned unsupervised captioning optimization into a semi-supervised one: pseudo captions are first obtained using the basic captioning model, and high-quality samples are then filtered based on the metric prompt to polish the captioning model iteratively. As elaborated in Fig.~\ref{fig:low_quality_captions}, the matching score between an image and a caption, obtained from the CLIP model, can serve as the metric prompt.
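To make the two prompts concrete, the following is a minimal sketch of how they could be computed with a frozen CLIP backbone (an illustrative rendition assuming OpenAI's \texttt{clip} package; the module name \texttt{SemanticPromptHead}, the layer sizes and the filtering threshold are placeholder choices, not values taken from this paper):

\begin{verbatim}
import torch
import torch.nn as nn
import clip  # assumption: OpenAI's CLIP package (github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

class SemanticPromptHead(nn.Module):
    # Feed-forward layer mapping a frozen CLIP image feature to a
    # semantic prompt vector that conditions the caption generator.
    def __init__(self, feat_dim=512, prompt_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, prompt_dim)

    def forward(self, images):  # images: preprocessed [B, 3, 224, 224]
        with torch.no_grad():   # CLIP stays frozen
            feats = clip_model.encode_image(images).float()
        return self.fc(feats)   # semantic prompts [B, prompt_dim]

@torch.no_grad()
def metric_prompt(images, captions):
    # CLIP image-text matching score (cosine similarity) per pair.
    img = clip_model.encode_image(images).float()
    txt = clip_model.encode_text(clip.tokenize(captions).to(device)).float()
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1)  # one score per (image, caption) pair

def filter_pseudo_pairs(images, captions, threshold=0.3):
    # Keep only well-matched (image, pseudo caption) pairs for the
    # semi-supervised polishing round; the threshold is illustrative.
    keep = metric_prompt(images, captions) > threshold
    return images[keep], [c for c, k in zip(captions, keep) if k]
\end{verbatim}

In this sketch the semantic prompt is a learned projection of the CLIP image feature, while the metric prompt is simply the CLIP similarity score used to retain well-matched pseudo pairs.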
This semantic prompt-based learning and metric prompt guided high-quality sample filtering are integrated to form a strong caption generator~\emph{without using annotated aligned image-text pairs.} To sum up, the contributions of this paper are threefold: $\bullet$ We have developed a novel Prompt-based Learning scheme for Unpaired Image Captioning, termed PL-UIC, which can make full use of VL-PTMs for high-performance captioning. To the best of our knowledge, it is the first work to infer, for the UIC task, the cue information (\textit{i.e.}, the prompt) about a given image that is embedded in large VL-PTMs. $\bullet$ Two types of simple yet effective prompt schemes have been designed for the UIC task, \textit{i.e.}, the semantic prompt and the metric prompt. The semantic prompt has been devised to extract vision-aware prior knowledge in textual format and is taken as input to guide the caption generation. The metric prompt guided pseudo-label filter has been designed to help improve the selection of highly matched image-caption pairs, which enables us to enhance the proposed UIC model in a semi-supervised way. $\bullet$ Extensive experiments have been carried out on the widely used COCO and Flickr30k datasets to demonstrate that the proposed prompt-based learning can effectively boost caption generation performance. The design principle proposed in this research can also be applied to other applications that demand prior knowledge. Related Work In this section, we review related work on supervised image captioning, unpaired image captioning, and prompt learning. \textbf{Image Captioning.~} Classical image captioning implements the encoder-decoder architecture, which first encodes images into features and decodes these image features into sentences <|cite_start|> (Reference: Show and Tell: A Neural Image Caption Generator: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Recall What You See Continually Using GridLSTM in Image Captioning: The goal of image captioning is to automatically describe an image with a sentence, and the task has attracted research attention from both the computer vision and natural-language processing research communities.
The existing encoder–decoder model and its variants, which are the most popular models for image captioning, use the image features in three ways: first, they inject the encoded image features into the decoder only once at the initial step, which does not enable the rich image content to be explored sufficiently while gradually generating a text caption; second, they concatenate the encoded image features with text as extra inputs at every step, which introduces unnecessary noise; and, third, they using an attention mechanism, which increases the computational complexity due to the introduction of extra neural nets to identify the attention regions. Different from the existing methods, in this paper, we propose a novel network, Recall Network, for generating captions that are consistent with the images. The recall network selectively involves the visual features by using a GridLSTM and, thus, is able to recall image contents while generating each word. By importing the visual information as the latent memory along the depth dimension LSTM, the decoder is able to admit the visual features dynamically through the inherent LSTM structure without adding any extra neural nets or parameters. The Recall Network efficiently prevents the decoder from deviating from the original image content. To verify the efficiency of our model, we conducted exhaustive experiments on full and dense image captioning. The experimental results clearly demonstrate that our recall network outperforms the conventional encoder–decoder model by a large margin and that it performs comparably to the state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Multi-Level Policy and Reward-Based Deep Reinforcement Learning Framework for Image Captioning: Image captioning is one of the most challenging tasks in AI because it requires an understanding of both complex visuals and natural language. Because image captioning is essentially a sequential prediction task, recent advances in image captioning have used reinforcement learning (RL) to better explore the dynamics of word-by-word generation. However, the existing RL-based image captioning methods rely primarily on a single policy network and reward function—an approach that is not well matched to the multi-level (word and sentence) and multi-modal (vision and language) nature of the task. To solve this problem, we propose a novel multi-level policy and reward RL framework for image captioning that can be easily integrated with RNN-based captioning models, language metrics, or visual-semantic functions for optimization. Specifically, the proposed framework includes two modules: 1) a multi-level policy network that jointly updates the word- and sentence-level policies for word generation; and 2) a multi-level reward function that collaboratively leverages both a vision-language reward and a language-language reward to guide the policy. Furthermore, we propose a guidance term to bridge the policy and the reward for RL optimization. The extensive experiments on the MSCOCO and Flickr30k datasets and the analyses show that the proposed framework achieves competitive performances on a variety of evaluation metrics. In addition, we conduct ablation studies on multiple variants of the proposed framework and explore several representative image captioning models and metrics for the word-level policy network and the language-language reward function to evaluate the generalization ability of the proposed framework.) <|cite_end|>later. 
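As a concrete illustration of this encoder-decoder pattern and the maximum-likelihood training it is usually paired with, consider the following minimal PyTorch-style sketch (an illustrative rendition, not any specific cited model; the backbone and layer sizes are placeholder choices):

\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderDecoderCaptioner(nn.Module):
    # Encode an image into a feature vector, then decode it into a
    # word sequence with an LSTM (teacher forcing during training).
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.img_proj = nn.Linear(512, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feat = self.img_proj(self.encoder(images).flatten(1))
        # The image feature acts as the first decoder input, followed
        # by the embedded ground-truth words shifted by one position.
        inputs = torch.cat([feat.unsqueeze(1),
                            self.embed(captions[:, :-1])], dim=1)
        hidden, _ = self.decoder(inputs)
        return self.out(hidden)  # per-step word logits

# Supervised objective: cross-entropy, i.e. the negative log-likelihood
# of the paired caption, so training maximizes p(caption | image).
# loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size),
#                              captions.reshape(-1))
\end{verbatim}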
The goal of these models is to maximize the probability of generating the correct captions, which relies on vast numbers of image-caption pairs <|cite_start|> (Reference: A Comprehensive Survey of Deep Learning for Image Captioning: Generating a description of an image is called image captioning. Image captioning requires to recognize the important objects, their attributes and their relationships in an image. It also needs to generate syntactically and semantically correct sentences. Deep learning-based techniques are capable of handling the complexities and challenges of image captioning. In this survey paper, we aim to present a comprehensive review of existing deep learning-based image captioning techniques. We discuss the foundation of the techniques to analyze their performances, strengths and limitations. We also discuss the datasets and the evaluation metrics popularly used in deep learning based automatic image captioning.) <|cite_end|> <|cite_start|> (Reference: Show, tell, and polish: Ruminant decoding for image captioning: The encoder-decoder framework has been the base of popular image captioning models, which typically predicts the target sentence based on the encoded source image one word at a time in sequence. However, such a single-pass decoding framework encounters two problems. First, mistakes in the predicted words cannot be corrected and may propagate to the entire sentence. Second, because the single-pass decoder cannot access the following un-generated words, it can only perform local planning to choose every single word according to the preceding words, while lacks the global planning ability as for maintaining the semantic consistency and fluency of the whole sentence. In order to address the above two problems, in this work, we design a ruminant captioning framework which contains an image encoder, a base decoder, and a ruminant decoder. Specifically, the outputs of the former/base decoder are utilized as the global information to guide the words prediction of the latter/ruminant decoder, in an attempt to mimic human polishing process. We enable jointly training of the whole framework and overcome the non-differential problem of discrete words by designing a novel reinforcement learning based optimization algorithm. Experiments on two datasets (MS COCO and Flickr30 k) demonstrate that our ruminant decoding method can bring significant improvements over traditional single-pass decoding based models and achieves state-of-the-art performance.) <|cite_end|>. To reduce this heavy dependence on costly image-caption pairs, some researchers proposed to complete the task with progressively fewer pairs, including novel object captioning <|cite_start|> (Reference: Incorporating Copying Mechanism in Image Captioning for Learning Novel Objects: Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects.
Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.) <|cite_end|> <|cite_start|> (Reference: Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data: While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.) <|cite_end|>and semi-supervised image captioning <|cite_start|> (Reference: A Semi-supervised Framework for Image Captioning: State-of-the-art approaches for image captioning require supervised training data consisting of captions with paired image data. These methods are typically unable to use unsupervised data such as textual data with no corresponding images, which is a much more abundant commodity. We here propose a novel way of using such textual data by artificially generating missing visual information. We evaluate this learning approach on a newly designed model that detects visual concepts present in an image and feed them to a reviewer-decoder architecture with an attention mechanism. Unlike previous approaches that encode visual concepts using word embeddings, we instead suggest using regional image features which capture more intrinsic information. The main benefit of this architecture is that it synthesizes meaningful thought vectors that capture salient image properties and then applies a soft attentive decoder to decode the thought vectors and generate image captions. We evaluate our model on both Microsoft COCO and Flickr30K datasets and demonstrate that this model combined with our semi-supervised learning method can largely improve performance and help the model to generate more accurate and diverse captions.) 
<|cite_end|> <|cite_start|> (Reference: Image Captioning with Very Scarce Supervised Data: Adversarial Semi-Supervised Learning Approach: Constructing an organized dataset comprised of a large number of images and several captions for each image is a laborious task, which requires vast human effort. On the other hand, collecting a large number of images and sentences separately may be immensely easier. In this paper, we develop a novel data-efficient semi-supervised framework for training an image captioning model. We leverage massive unpaired image and caption data by learning to associate them. To this end, our proposed semi-supervised learning method assigns pseudo-labels to unpaired samples via Generative Adversarial Networks to learn the joint distribution of image and caption. To evaluate, we construct scarcely-paired COCO dataset, a modified version of MS COCO caption dataset. The empirical results show the effectiveness of our method compared to several strong baselines, especially when the amount of the paired samples are scarce.) <|cite_end|>. Despite this promising progress, costly paired image-caption datasets remain indispensable in the training process. Distinct from all these works, we attempt to complete UIC without requiring any image-caption pairs. \textbf{Unpaired Image Captioning.~} Distinct from the aforementioned supervised image captioning, UIC aims to generate descriptions for images without requiring any image-caption pairs. Feng \textit{et al.} <|cite_start|> (Reference: Unsupervised Image Captioning: Deep neural networks have achieved great successes on the image captioning task. However, most of the existing models depend heavily on paired image-sentence datasets, which are very expensive to acquire. In this paper, we make the first attempt to train an image captioning model in an unsupervised manner. Instead of relying on manually labeled image-sentence pairs, our proposed model merely requires an image set, a sentence corpus, and an existing visual concept detector. The sentence corpus is used to teach the captioning model how to generate plausible sentences. Meanwhile, the knowledge in the visual concept detector is distilled into the captioning model to guide the model to recognize the visual concepts in an image. In order to further encourage the generated captions to be semantically consistent with the image, the image and caption are projected into a common latent space so that they can reconstruct each other. Given that the existing sentence corpora are mainly designed for linguistic research and are thus with little reference to image contents, we crawl a large-scale image description corpus of two million natural sentences to facilitate the unsupervised image captioning scenario. Experimental results show that our proposed model is able to produce quite promising results without any caption annotations.) <|cite_end|>tackled UIC via adversarial learning and the alignments between images and visual concepts. Although UIC is thus achieved, a large performance gap remains between UIC and supervised image captioning due to the weak vision-language correlations. Thus, some researchers put effort into enhancing the weak cross-domain correlations in the task <|cite_start|> (Reference: Towards Unsupervised Image Captioning with Shared Multimodal Embeddings: Understanding images without explicit supervision has become an important problem in computer vision.
In this paper, we address image captioning by generating language descriptions of scenes without learning from annotated pairs of images and their captions. The core component of our approach is a shared latent space that is structured by visual concepts. In this space, the two modalities should be indistinguishable. A language model is first trained to encode sentences into semantically structured embeddings. Image features that are translated into this embedding space can be decoded into descriptions through the same language model, similarly to sentence embeddings. This translation is learned from weakly paired images and text using a loss robust to noisy assignments and a conditional adversarial component. Our approach allows to exploit large text corpora outside the annotated distributions of image/caption data. Our experiments show that the proposed domain alignment learns a semantically meaningful representation which outperforms previous work.) <|cite_end|> <|cite_start|> (Reference: Recurrent Relational Memory Network for Unsupervised Image Captioning: Unsupervised image captioning with no annotations is an emerging challenge in computer vision, where the existing arts usually adopt GAN (Generative Adversarial Networks) models. In this paper, we propose a novel memory-based network rather than GAN, named Recurrent Relational Memory Network ($R^2M$). Unlike complicated and sensitive adversarial learning that non-ideally performs for long sentence generation, $R^2M$ implements a concepts-to-sentence memory translator through two-stage memory mechanisms: fusion and recurrent memories, correlating the relational reasoning between common visual concepts and the generated words for long periods. $R^2M$ encodes visual context through unsupervised training on images, while enabling the memory to learn from irrelevant textual corpus via supervised fashion. Our solution enjoys less learnable parameters and higher computational efficiency than GAN-based methods, which heavily bear parameter sensitivity. We experimentally validate the superiority of $R^2M$ than state-of-the-arts on all benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Captioning with Semantic-Constrained Self-Learning: Image captioning has been an emerging and fast-developing research topic. Nevertheless, most existing works heavily rely on large amounts of image-sentence pairs and therefore hinder the practical applications of captioning in the wild. In this paper, we present a novel Semantic-Constrained Self-learning (SCS) framework that explores an iterative self-learning strategy to learn an image captioner with only unpaired image and text data. Technically, SCS consists of two stages, i.e., pseudo pair generation and captioner re-training, iteratively producing "pseudo" image-sentence pairs via a pre-trained captioner and re-training the captioner with the pseudo pairs, respectively. Particularly, both stages are guided by the recognized objects in the image, that act as semantic constraint to strengthen the semantic alignment between the input image and the output sentence. We leverage a semantic-constrained beam search for pseudo pair generation to regularize the decoding process with the recognized objects via forcing the inclusion/exclusion of the recognized/irrelevant objects in output sentence. For captioner re-training, a self-supervised triplet loss is utilized to preserve the relative semantic similarity ordering among generated sentences with regard to the input image triplets. 
Moreover, an object inclusion reward and an adversarial reward are adopted to encourage the inclusion of the predicted objects in the output sentence and pursue the generation of more realistic sentences during self-critical training, respectively. Experiments conducted on both dependent and independent unpaired data validate the superiority of SCS. More remarkably, we obtain the best published CIDEr score to-date of 74.7\% on COCO Karpathy test split for unpaired image captioning.) <|cite_end|>. For example, Laina \textit{et al.} <|cite_start|> (Reference: Towards Unsupervised Image Captioning with Shared Multimodal Embeddings: Understanding images without explicit supervision has become an important problem in computer vision. In this paper, we address image captioning by generating language descriptions of scenes without learning from annotated pairs of images and their captions. The core component of our approach is a shared latent space that is structured by visual concepts. In this space, the two modalities should be indistinguishable. A language model is first trained to encode sentences into semantically structured embeddings. Image features that are translated into this embedding space can be decoded into descriptions through the same language model, similarly to sentence embeddings. This translation is learned from weakly paired images and text using a loss robust to noisy assignments and a conditional adversarial component. Our approach allows to exploit large text corpora outside the annotated distributions of image/caption data. Our experiments show that the proposed domain alignment learns a semantically meaningful representation which outperforms previous work.) <|cite_end|>proposed to narrow the domain gap between images and languages by a shared embedding space of images and visual concepts. Also, several works focused on adopting scene graph modeling in UIC to align more textual information with images, including relationships and attributes <|cite_start|> (Reference: Unpaired Image Captioning via Scene Graph Alignments: Most of current image captioning models heavily rely on paired image-caption datasets. However, getting large scale image-caption paired data is labor-intensive and time-consuming. In this paper, we present a scene graph-based approach for unpaired image captioning. Our framework comprises an image scene graph generator, a sentence scene graph generator, a scene graph encoder, and a sentence decoder. Specifically, we first train the scene graph encoder and the sentence decoder on the text modality. To align the scene graphs between images and sentences, we propose an unsupervised feature alignment method that maps the scene graph features from the image to the sentence modality. Experimental results show that our proposed model can generate quite promising results without using any image-caption training pairs, outperforming existing methods by a wide margin.) <|cite_end|> <|cite_start|> (Reference: Exploring Semantic Relationships for Image Captioning without Parallel Data: Recently, image captioning has aroused great interest in both academic and industrial worlds. Most existing systems are built upon large-scale datasets consisting of image-sentence pairs, which, however, are time-consuming to construct. In addition, even for the most advanced image captioning systems, it is still difficult to realize deep image understanding. 
In this work, we achieve unpaired image captioning by bridging the vision and the language domains with high-level semantic information. The motivation stems from the fact that the semantic concepts with the same modality can be extracted from both images and descriptions. To further improve the quality of captions generated by the model, we propose the Semantic Relationship Explorer, which explores the relationships between semantic concepts for better understanding of the image. Extensive experiments on MSCOCO dataset show that we can generate desirable captions without paired datasets. Furthermore, the proposed approach boosts five strong baselines under the paired setting, where the most significant improvement in CIDEr score reaches 8%, demonstrating that it is effective and generalizes well to a wide range of models.) <|cite_end|> <|cite_start|> (Reference: Interactions Guided Generative Adversarial Network for unsupervised image captioning: ) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Captioning by Image-level Weakly-Supervised Visual Concept Recognition: The goal of unpaired image captioning (UIC) is to describe images without using image-caption pairs in the training phase. Although challenging, we except the task can be accomplished by leveraging a training set of images aligned with visual concepts. Most existing studies use off-the-shelf algorithms to obtain the visual concepts because the Bounding Box (BBox) labels or relationship-triplet labels used for the training are expensive to acquire. In order to resolve the problem in expensive annotations, we propose a novel approach to achieve cost-effective UIC. Specifically, we adopt image-level labels for the optimization of the UIC model in a weakly-supervised manner. For each image, we assume that only the image-level labels are available without specific locations and numbers. The image-level labels are utilized to train a weakly-supervised object recognition model to extract object information (e.g., instance) in an image, and the extracted instances are adopted to infer the relationships among different objects based on an enhanced graph neural network (GNN). The proposed approach achieves comparable or even better performance compared with previous methods without the expensive cost of annotations. Furthermore, we design an unrecognized object (UnO) loss combined with a visual concept reward to improve the alignment of the inferred object and relationship information with the images. It can effectively alleviate the issue encountered by existing UIC models about generating sentences with nonexistent objects. To the best of our knowledge, this is the first attempt to solve the problem of Weakly-Supervised visual concept recognition for UIC (WS-UIC) based only on image-level labels. Extensive experiments have been carried out to demonstrate that the proposed WS-UIC model achieves inspiring results on the COCO dataset while significantly reducing the cost of labeling.) <|cite_end|>. These methods achieve better captioning performance because much richer vision-language alignment is exploited in UIC. Despite the improved captioning performance, there is still much room for improvement, because the majority of vision-language correlations remain unexploited.
Different from all these works, we attempt to utilize prompt-based learning in UIC, aided by the pre-trained CLIP model <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>with its abundant vision-language prior knowledge. \begin{figure} \centering \includegraphics[width=0.48\textwidth, height=2.3cm]{samples/images/low_quality_captions.pdf} \caption{The metric prompt of image-caption pairs generated by the prompt-based UIC model. The higher the metric value, the higher the quality of the image-caption pair.} \label{fig:low_quality_captions} \end{figure} \textbf{Prompt-based Learning.~} Prompt-based learning methods were first proposed in natural language processing (NLP) and aim to reduce or obviate the requirement for large supervised datasets in downstream tasks
[ "<|reference_start|> Unpaired Image Captioning with Semantic-Constrained Self-Learning: Image captioning has been an emerging and fast-developing research topic. Nevertheless, most existing works heavily rely on large amounts of image-sentence pairs and therefore hinder the practical applications of captioning in the wild. In this paper, we present a novel Semantic-Constrained Self-learning (SCS) framework that explores an iterative self-learning strategy to learn an image captioner with only unpaired image and text data. Technically, SCS consists of two stages, i.e., pseudo pair generation and captioner re-training, iteratively producing \"pseudo\" image-sentence pairs via a pre-trained captioner and re-training the captioner with the pseudo pairs, respectively. Particularly, both stages are guided by the recognized objects in the image, that act as semantic constraint to strengthen the semantic alignment between the input image and the output sentence. We leverage a semantic-constrained beam search for pseudo pair generation to regularize the decoding process with the recognized objects via forcing the inclusion/exclusion of the recognized/irrelevant objects in output sentence. For captioner re-training, a self-supervised triplet loss is utilized to preserve the relative semantic similarity ordering among generated sentences with regard to the input image triplets. Moreover, an object inclusion reward and an adversarial reward are adopted to encourage the inclusion of the predicted objects in the output sentence and pursue the generation of more realistic sentences during self-critical training, respectively. Experiments conducted on both dependent and independent unpaired data validate the superiority of SCS. More remarkably, we obtain the best published CIDEr score to-date of 74.7\\% on COCO Karpathy test split for unpaired image captioning. <|reference_end|>", "<|reference_start|> Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing: This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub \"prompt-based learning\". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g.the choice of pre-trained models, prompts, and tuning strategies. 
To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including constantly-updated survey, and paperlist. <|reference_end|>", "<|reference_start|> Multi-Level Policy and Reward-Based Deep Reinforcement Learning Framework\nfor Image Captioning: Image captioning is one of the most challenging tasks in AI because it requires an understanding of both complex visuals and natural language. Because image captioning is essentially a sequential prediction task, recent advances in image captioning have used reinforcement learning (RL) to better explore the dynamics of word-by-word generation. However, the existing RL-based image captioning methods rely primarily on a single policy network and reward function—an approach that is not well matched to the multi-level (word and sentence) and multi-modal (vision and language) nature of the task. To solve this problem, we propose a novel multi-level policy and reward RL framework for image captioning that can be easily integrated with RNN-based captioning models, language metrics, or visual-semantic functions for optimization. Specifically, the proposed framework includes two modules: 1) a multi-level policy network that jointly updates the word- and sentence-level policies for word generation; and 2) a multi-level reward function that collaboratively leverages both a vision-language reward and a language-language reward to guide the policy. Furthermore, we propose a guidance term to bridge the policy and the reward for RL optimization. The extensive experiments on the MSCOCO and Flickr30k datasets and the analyses show that the proposed framework achieves competitive performances on a variety of evaluation metrics. In addition, we conduct ablation studies on multiple variants of the proposed framework and explore several representative image captioning models and metrics for the word-level policy network and the language-language reward function to evaluate the generalization ability of the proposed framework. <|reference_end|>", "<|reference_start|> A Comprehensive Survey of Deep Learning for Image Captioning: Generating a description of an image is called image captioning. Image captioning requires to recognize the important objects, their attributes and their relationships in an image. It also needs to generate syntactically and semantically correct sentences. Deep learning-based techniques are capable of handling the complexities and challenges of image captioning. In this survey paper, we aim to present a comprehensive review of existing deep learning-based image captioning techniques. We discuss the foundation of the techniques to analyze their performances, strengths and limitations. We also discuss the datasets and the evaluation metrics popularly used in deep learning based automatic image captioning. <|reference_end|>" ]
[ 13, 27, 34, 35 ]
{"<|multi_cite_1_1|>": "arxiv-279901", "<|multi_cite_1_2|>": "ss-915290", "<|multi_cite_1_3|>": "ss-898287", "<|multi_cite_2_1|>": "arxiv-68898", "<|multi_cite_2_2|>": "arxiv-106294", "<|multi_cite_3_1|>": "arxiv-111063", "<|multi_cite_3_2|>": "ss-982814", "<|multi_cite_4_1|>": "ss-1508480", "<|multi_cite_4_2|>": "arxiv-149337", "<|multi_cite_4_3|>": "arxiv-249347", "<|multi_cite_5_1|>": "arxiv-167718", "<|multi_cite_5_2|>": "arxiv-152285", "<|multi_cite_5_3|>": "arxiv-326214", "<|multi_cite_6_1|>": "ss-1533615", "<|multi_cite_6_2|>": "arxiv-278034", "<|multi_cite_7_1|>": "ss-1858300", "<|multi_cite_7_2|>": "ss-1530726", "<|multi_cite_7_3|>": "ss-1515242", "<|multi_cite_8_1|>": "arxiv-123083", "<|multi_cite_8_2|>": "arxiv-99044", "<|multi_cite_9_1|>": "arxiv-181949", "<|multi_cite_9_2|>": "arxiv-220278", "<|cite_10|>": "arxiv-323919", "<|cite_11|>": "arxiv-323919", "<|multi_cite_12_1|>": "arxiv-386472", "<|multi_cite_12_2|>": "arxiv-385347", "<|multi_cite_12_3|>": "arxiv-352433", "<|cite_13|>": "arxiv-357741", "<|cite_14|>": "arxiv-365831", "<|cite_15|>": "arxiv-364496", "<|cite_16|>": "arxiv-407638", "<|cite_17|>": "arxiv-323919", "<|multi_cite_18_1|>": "arxiv-68898", "<|multi_cite_18_2|>": "ss-1533614", "<|multi_cite_18_3|>": "ss-1281154", "<|multi_cite_19_1|>": "arxiv-175638", "<|multi_cite_19_2|>": "ss-1088019", "<|multi_cite_20_1|>": "arxiv-132186", "<|multi_cite_20_2|>": "arxiv-87348", "<|multi_cite_21_1|>": "arxiv-110264", "<|multi_cite_21_2|>": "arxiv-222003", "<|cite_22|>": "arxiv-181949", "<|multi_cite_23_1|>": "arxiv-220278", "<|multi_cite_23_2|>": "arxiv-274163", "<|multi_cite_23_3|>": "ss-1533615", "<|cite_24|>": "arxiv-220278", "<|multi_cite_25_1|>": "arxiv-196733", "<|multi_cite_25_2|>": "ss-1668137", "<|multi_cite_25_3|>": "ss-1668138", "<|multi_cite_25_4|>": "arxiv-403741", "<|cite_26|>": "arxiv-323919", "<|cite_27|>": "arxiv-357741", "<|multi_cite_28_1|>": "arxiv-387930", "<|multi_cite_28_2|>": "arxiv-359037", "<|multi_cite_28_3|>": "arxiv-387919", "<|multi_cite_29_1|>": "arxiv-365831", "<|multi_cite_29_2|>": "arxiv-378736", "<|multi_cite_30_1|>": "arxiv-268228", "<|multi_cite_30_2|>": "arxiv-244298", "<|cite_31|>": "arxiv-364496", "<|multi_cite_32_1|>": "ss-2041094", "<|multi_cite_32_2|>": "arxiv-374633", "<|cite_33|>": "arxiv-374633"}
1107.3729
<|paper_start|> Title: On the approximation in the smoothed finite element method (SFEM) Abstract: On the approximation in the smoothed finite element method (SFEM): This letter aims at resolving the issues raised in the recent short communication [1] and answered by [2] by proposing a systematic approximation scheme based on non-mapped shape functions, which allows one both to fully exploit the unique advantages of the smoothed finite element method (SFEM) [3, 4, 5, 6, 7, 8, 9] and to resolve the existence, linearity and positivity deficiencies pointed out in [1]. We show that Wachspress interpolants [10] computed in the physical coordinate system are very well suited to the SFEM, especially when elements are heavily distorted (obtuse interior angles). The proposed approximation leads to results which are almost identical to those of the SFEM initially proposed in [3]. These results show that the proposed approximation scheme forms a strong and rigorous basis for the construction of smoothed finite element methods. Introduction The smoothed finite element method (SFEM) was first proposed in [3]. This new numerical method, based on gradient (strain) smoothing, is rooted in meshfree stabilized conforming nodal integration <|cite_start|> (Reference: Some recent improvements in meshfree methods for incompressible finite elasticity boundary value problems with contact: ) <|cite_end|> and was shown to provide a suite of finite elements with a range of interesting properties. Those properties depend on the number of smoothing cells employed within each finite element (see <|cite_start|> (Reference: Strain smoothing in FEM and XFEM: ) <|cite_end|> for a review of recent developments and properties) and include: \begin{itemize} \item improved dual accuracy and superconvergence; \item relative insensitivity to volumetric locking; \item relative insensitivity to mesh distortion; \item a softer response than that of the FEM. \end{itemize} A rigorous theoretical framework was provided in <|cite_start|> (Reference: Theoretical aspects of the smoothed finite element method ({SFEM}): This paper examines the theoretical bases for the smoothed finite element method (SFEM), which was formulated by incorporating cell‐wise strain smoothing operation into standard compatible finite element method (FEM).
<|cite_end|> <|cite_start|> (Reference: Smooth finite element methods: Convergence, accuracy and properties: A stabilized conforming nodal integration finite element method based on strain smoothing stabilization is presented. The integration of the stiffness matrix is performed on the boundaries of the finite elements. A rigorous variational framework based on the Hu–Washizu assumed strain variational form is developed.) <|cite_end|> and the method was extended to plates <|cite_start|> (Reference: A smoothed finite element method for plate analysis: ) <|cite_end|>, to shells, and was coupled with the extended finite element method <|cite_start|> (Reference: Strain smoothing in FEM and XFEM: ) <|cite_end|>. The essential feature of the SFEM is that no isoparametric mapping is required, which implies that the approximation can be defined in the physical space directly, thereby providing freedom in the selection of the element geometry. In the initial paper (Eq. (22), reproduced here for simplicity as \Eref{eqn:liueq22}), non-mapped Lagrange shape functions are proposed as one possibility for calculating the shape functions at an arbitrary point within a smoothed finite element. It is then stated in the same paper (p863 last paragraph) that ``unless state otherwise, we still use the averaged shape functions for convenience.'' These shape functions are recalled in Table~\ref{table:ShapeFunctions} and \fref{fig:ShapeFunctions}, for ease of reading. \begin{equation} \renewcommand{\arraystretch}{1.5} N_e(\xx_e) = \left[ \begin{array}{cccc}1 & x_e & y_e & x_e y_e \end{array} \right] \left[ \begin{array}{cccc}1 & x_1 & y_1 & x_1 y_1 \\ 1 & x_2 & y_2 & x_2 y_2 \\ 1 & x_3 & y_3 & x_3 y_3 \\ 1 & x_4 & y_4 & x_4 y_4 \end{array} \right]^{-1} \label{eqn:liueq22} \end{equation} \begin{table} \renewcommand{\arraystretch}{1} \caption{Shape function values at different sites within an element (\fref{fig:ShapeFunctions})} \centering \begin{tabular}{llllll} \hline Site & Node 1 & Node 2 & Node 3 & Node 4 & Description \\ \cline{1-6} \hline 1 & 1.0 & 0.0 & 0.0 & 0.0 & Field node \\ 2 & 0.0 & 1.0 & 0.0 & 0.0 & Field node \\ 3 & 0.0 & 0.0 & 1.0 & 0.0 & Field node \\ 4 & 0.0 & 0.0 & 0.0 & 1.0 & Field node \\ 5 & 0.5 & 0.5 & 0.0 & 0.0 & Side midpoint \\ 6 & 0.0 & 0.5 & 0.5 & 0.0 & Side midpoint \\ 7 & 0.0 & 0.0 & 0.5 & 0.5 & Side midpoint \\ 8 & 0.5 & 0.0 & 0.0 & 0.5 & Side midpoint \\ 9 & 0.25 & 0.25 & 0.25 & 0.25 & Intersection of two bimedians\\ \hline \end{tabular} \label{table:ShapeFunctions} \end{table} \begin{figure} \centering \scalebox{1.0}{\input{shapefns.pstex_t}} \caption{A four-node element divided into four smoothing cells.} \label{fig:ShapeFunctions} \end{figure} In fact, in our work on the SFEM <|cite_start|> (Reference: Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain. This yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. The one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities.
In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. The specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. Numerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Smooth finite element methods: Convergence, accuracy and properties: A stabilized conforming nodal integration finite element method based on strain smoothing stabilization is presented. The integration of the stiffness matrix is performed on the boundaries of the finite elements. A rigorous variational framework based on the Hu–Washizu assumed strain variational form is developed.) <|cite_end|> <|cite_start|> (Reference: A smoothed finite element method for plate analysis: ) <|cite_end|>, and, to our knowledge, in all other work published to date <|cite_start|> (Reference: Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain. This yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. The one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities. In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. The specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. Numerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Theoretical aspects of the smoothed finite element method ({SFEM}): This paper examines the theoretical bases for the smoothed finite element method (SFEM), which was formulated by incorporating cell‐wise strain smoothing operation into standard compatible finite element method (FEM). 
The weak form of SFEM can be derived from the Hu–Washizu three‐field variational principle. For elastic problems, it is proved that 1D linear element and 2D linear triangle element in SFEM are identical to their counterparts in FEM, while 2D bilinear quadrilateral elements in SFEM are different from that of FEM: when the number of smoothing cells (SCs) of the elements equals 1, the SFEM solution is proved to be ‘variationally consistent’ and has the same properties with those of FEM using reduced integration; when SC approaches infinity, the SFEM solution will approach the solution of the standard displacement compatible FEM model; when SC is a finite number larger than 1, the SFEM solutions are not ‘variationally consistent’ but ‘energy consistent’, and will change monotonously from the solution of SFEM (SC = 1) to that of SFEM (SC → ∞). It is suggested that there exists an optimal number of SC such that the SFEM solution is closest to the exact solution. The properties of SFEM are confirmed by numerical examples. Copyright © 2006 John Wiley & Sons, Ltd.) <|cite_end|>, these ``averaged shape functions'' have been used, with good results. Yet, <|cite_start|> (Reference: On the smoothed finite element method: Recently, Liu et al. proposed the smoothed finite element method by using the non‐mapped shape functions and then introducing the strain smoothing operator when evaluating the element stiffness in the framework of the finite element method. However, the theories and examples by Liu et al. are not sufficient for general quadrilateral elements. This paper shows that the non‐mapped shape functions used in the smoothed finite element have disadvantages in existence, linearity, non‐negativity and patch test. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> provides a critique of the SFEM stating that the shape functions provided by \Eref{eqn:liueq22} are inadequate because: \begin{itemize} \item they do not always exist (as described in the 1975 book <|cite_start|> (Reference: A Rational Basis for Function Approximation: ) <|cite_end|>); \item they may not be positive everywhere in the element; \item they may not be linear everywhere in the element. \end{itemize} Because of this, <|cite_start|> (Reference: On the smoothed finite element method: Recently, Liu et al. proposed the smoothed finite element method by using the non‐mapped shape functions and then introducing the strain smoothing operator when evaluating the element stiffness in the framework of the finite element method. However, the theories and examples by Liu et al. are not sufficient for general quadrilateral elements. This paper shows that the non‐mapped shape functions used in the smoothed finite element have disadvantages in existence, linearity, non‐negativity and patch test. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> disqualifies the current version of the SFEM and discredits the existing results of <|cite_start|> (Reference: Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain.
This yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. The one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities. In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. The specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. Numerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Theoretical aspects of the smoothed finite element method ({SFEM}): This paper examines the theoretical bases for the smoothed finite element method (SFEM), which was formulated by incorporating cell‐wise strain smoothing operation into standard compatible finite element method (FEM). The weak form of SFEM can be derived from the Hu–Washizu three‐field variational principle. For elastic problems, it is proved that 1D linear element and 2D linear triangle element in SFEM are identical to their counterparts in FEM, while 2D bilinear quadrilateral elements in SFEM are different from that of FEM: when the number of smoothing cells (SCs) of the elements equals 1, the SFEM solution is proved to be ‘variationally consistent’ and has the same properties with those of FEM using reduced integration; when SC approaches infinity, the SFEM solution will approach the solution of the standard displacement compatible FEM model; when SC is a finite number larger than 1, the SFEM solutions are not ‘variationally consistent’ but ‘energy consistent’, and will change monotonously from the solution of SFEM (SC = 1) to that of SFEM (SC → ∞). It is suggested that there exists an optimal number of SC such that the SFEM solution is closest to the exact solution. The properties of SFEM are confirmed by numerical examples. Copyright © 2006 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Smooth finite element methods: Convergence, accuracy and properties: A stabilized conforming nodal integration finite element method based on strain smoothing stabilization is presented. The integration of the stiffness matrix is performed on the boundaries of the finite elements. A rigorous variational framework based on the Hu–Washizu assumed strain variational form is developed.) <|cite_end|> <|cite_start|> (Reference: A smoothed finite element method for plate analysis: ) <|cite_end|>, despite the fact (also noted in <|cite_start|> (Reference: On the essence and the evaluation of the shape functions for the smoothed finite element method (sfem): This paper is written in response to the recently published paper (Int. J. Numer. Meth. 
Engng 2008; 76:1285–1295) at IJNME entitled ‘On the smoothed finite element method’ (SFEM) by Zhang HH, Liu SJ, Li LX.) <|cite_end|>) that those non-mapped Lagrange shape functions of \Eref{eqn:liueq22} were in general not used in the aforementioned papers. In this contribution, we show that it is possible to resolve the three issues mentioned by <|cite_start|> (Reference: On the smoothed finite element method: Recently, Liu et al. proposed the smoothed finite element method by using the non‐mapped shape functions and then introducing the strain smoothing operator when evaluating the element stiffness in the framework of the finite element method. However, the theories and examples by Liu et al. are not sufficient for general quadrilateral elements. This paper shows that the non‐mapped shape functions used in the smoothed finite element have disadvantages in existence, linearity, non‐negativity and patch test. Copyright © 2008 John Wiley & Sons, Ltd.) <|cite_end|> about the Lagrange non-mapped shape functions while retaining the advantageous features of the smoothed finite element method, in particular its ability to deal with extremely distorted meshes. <|paper_end|>
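For context on the smoothing operation at issue in the exchange above, the following is a sketch of the standard cell-wise strain smoothing used throughout the SFEM literature cited here. The notation ($\Omega_C$ for a smoothing cell of area $A_C$, with boundary $\Gamma_C$, outward normal $\mathbf{n}$, and nodal shape functions $N_I$) is ours, and the sketch is a generic reconstruction rather than a formula taken from any single cited paper. The smoothed strain is the cell average of the compatible strain, which the divergence theorem converts into a boundary integral, so that only shape-function values on $\Gamma_C$ (precisely the quantities disputed above) are ever needed:
\begin{align}
  \tilde{\varepsilon}_{ij}
    = \frac{1}{A_C}\int_{\Omega_C} \varepsilon_{ij}(\mathbf{x})\,\mathrm{d}\Omega
    = \frac{1}{2A_C}\oint_{\Gamma_C}\left(u_i n_j + u_j n_i\right)\mathrm{d}\Gamma,
  \qquad
  \tilde{\mathbf{B}}_I
    = \frac{1}{A_C}\oint_{\Gamma_C}
      \begin{pmatrix} n_x N_I & 0 \\ 0 & n_y N_I \\ n_y N_I & n_x N_I \end{pmatrix}
      \mathrm{d}\Gamma .
\end{align}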
[ "<|reference_start|> Smooth finite element methods: Convergence, accuracy and properties: A stabilized conforming nodal integration finite element method based on strain smoothing stabilization is presented. The integration of the stiffness matrix is performed on the boundaries of the finite elements. A rigorous variational framework based on the Hu–Washizu assumed strain variational form is developed. <|reference_end|>", "<|reference_start|> Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain. \n \n \n \nThis yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. \n \n \n \nThe one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities. In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. \n \n \n \nThe specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. \n \n \n \nNumerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd. <|reference_end|>", "<|reference_start|> Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain. \n \n \n \nThis yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. \n \n \n \nThe one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities. In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. 
\n \n \n \nThe specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. \n \n \n \nNumerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd. <|reference_end|>", "<|reference_start|> Addressing volumetric locking and instabilities by selective integration in smoothed finite elements: This paper promotes the development of a novel family of finite elements with smoothed strains, offering remarkable properties. In the smoothed finite element method (FEM), elements are divided into subcells. The strain at a point is defined as a weighted average of the standard strain field over a representative domain. \n \n \n \nThis yields superconvergent stresses, both in regular and singular settings, as well as increased accuracy, with slightly lower computational cost than the standard FEM. \n \n \n \nThe one-subcell version that does not exhibit volumetric locking yields more accurate stresses but less accurate displacements and is equivalent to a quasi-equilibrium FEM. It is also subject to instabilities. In the limit where the number of subcells goes to infinity, the standard FEM is recovered, which yields more accurate displacements and less accurate stresses. \n \n \n \nThe specific contribution of this paper is to show that expressing the volumetric part of the strain field using a one-subcell formulation is sufficient to get rid of volumetric locking and increase the displacement accuracy compared with the standard FEM when the single subcell version is used to express both the volumetric and deviatoric parts of the strain. Selective integration also alleviates instabilities associated with the single subcell element, which are due to rank deficiency. \n \n \n \nNumerical examples on various compressible and incompressible linear elastic test cases show that high accuracy is retained compared with the standard FEM without increasing computational cost. Copyright © 2008 John Wiley & Sons, Ltd. <|reference_end|>" ]
[ 3, 6, 9, 14 ]
{"<|cite_2|>": "ss-1128754", "<|cite_3|>": "ss-1128755", "<|multi_cite_4_1|>": "ss-1128756", "<|multi_cite_4_2|>": "ss-1128757", "<|cite_5|>": "ss-1128758", "<|cite_7|>": "ss-1128755", "<|multi_cite_10_1|>": "ss-1128759", "<|multi_cite_10_4|>": "ss-1128757", "<|multi_cite_10_5|>": "ss-1128758", "<|multi_cite_11_2|>": "ss-1128759", "<|multi_cite_11_3|>": "ss-1128756", "<|cite_12|>": "ss-1128760", "<|cite_13|>": "ss-1395028", "<|cite_14|>": "ss-1128760", "<|multi_cite_15_2|>": "ss-1128759", "<|multi_cite_15_3|>": "ss-1128756", "<|multi_cite_15_6|>": "ss-1128757", "<|multi_cite_15_7|>": "ss-1128758", "<|cite_16|>": "ss-1128761", "<|cite_17|>": "ss-1128760"}
2312.10082
<|paper_start|> Title: Finding Paths for Explainable MOOC Recommendation: A Learner Perspective Abstract: Finding Paths for Explainable MOOC Recommendation: A Learner Perspective: The increasing availability of Massive Open Online Courses (MOOCs) has created a necessity for personalized course recommendation systems. These systems often combine neural networks with Knowledge Graphs (KGs) to achieve richer representations of learners and courses. While these enriched representations allow more accurate and personalized recommendations, explainability remains a significant challenge which is especially problematic for certain domains with significant impact such as education and online learning. Recently, a novel class of recommender systems that uses reinforcement learning and graph reasoning over KGs has been proposed to generate explainable recommendations in the form of paths over a KG. Despite their accuracy and interpretability on e-commerce datasets, these approaches have scarcely been applied to the educational domain and their use in practice has not been studied. In this work, we propose an explainable recommendation system for MOOCs that uses graph reasoning. To validate the practical implications of our approach, we conducted a user study examining user perceptions of our new explainable recommendations. We demonstrate the generalizability of our approach by conducting experiments on two educational datasets: COCO and Xuetang. Introduction The proliferation of Massive Open Online Courses (MOOCs) has led to a democratization of educational resources, yet it has also introduced an information overload problem. To illustrate, Udemy provides over 213,000 courses, including 10,500 accredited ones, while Coursera hosts over 14,000 courses, with 7,000 being accredited. This overwhelming variety highlights the essential role of effective recommendation systems in assisting learners in selecting from the myriad of available courses. These systems are essential in helping learners find the most suitable courses based on their individual needs (e.g., goals, backgrounds, and motivations). They can facilitate optimal learning experiences and play a pivotal role in effectively steering learners' academic and professional paths. Indeed, for a recommendation to be truly impactful, it must be tailored to address the students' diverse learning objectives, skill levels, and aspirations. Recent advancements in neural network-based recommender systems have set new standards for generating precise and individualized course suggestions <|cite_start|> (Reference: A systematic review and research perspective on recommender systems: ) <|cite_end|>. Nonetheless, the majority of these models serve as black boxes leaving the rationale behind their recommendations opaque. This lack of transparency can diminish learners' trust and their willingness to accept the suggested recommendations <|cite_start|> (Reference: How much information?: Effects of transparency on trust in an algorithmic interface: The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized drawing on theories of information processing and procedural justice. 
In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency - not too little and not too much.) <|cite_end|>, highlighting the tradeoff between model accuracy and interpretability. Given the significant impact of educational choices, and considering the proven connection between clear, understandable recommendations and trust among learners <|cite_start|> (Reference: How much information?: Effects of transparency on trust in an algorithmic interface: The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency - not too little and not too much.) <|cite_end|>, there exists a clear need for algorithms that are not only accurate but also explainable. Such algorithms should explain their suggestions, assisting learners in making well-informed decisions by balancing accuracy with clarity and transparency in the recommendation process. In a variety of domains, explainable recommendation systems have garnered considerable attention as an active area of research <|cite_start|> (Reference: Explainable Recommendation with Comparative Constraints on Product Aspects: To aid users in choice-making, explainable recommendation models seek to provide not only accurate recommendations but also accompanying explanations that help to make sense of those recommendations. Most of the previous approaches rely on evaluative explanations, assessing the quality of an individual item along some aspects of interest to the user. In this work, we are interested in comparative explanations, the less studied problem of assessing a recommended item in comparison to another reference item. In particular, we propose to anchor reference items on the previously adopted items in a user's history. Not only do we aim at providing comparative explanations involving such items, but we also formulate comparative constraints involving aspect-level comparisons between the target item and the reference items. 
The framework allows us to incorporate these constraints and integrate them with recommendation objectives involving both types of subjective and objective aspect-level quality assumptions. Experiments on public datasets of several product categories showcase the efficacies of our methodology as compared to baselines at attaining better recommendation accuracies and intuitive explanations.) <|cite_end|>. Different approaches have been explored, including factorization models that explain recommendations by selecting item features from user reviews <|cite_start|> (Reference: Explicit Factor Models for Explainable Recommendation Based on Phrase-Level Sentiment Analysis: Collaborative Filtering(CF)-based recommendation algorithms, such as Latent Factor Models (LFM), work well in terms of prediction accuracy. However, the latent features make it difficult to explain the recommendation results to the users. Fortunately, with the continuous growth of online user reviews, the information available for training a recommender system is no longer limited to just numerical star ratings or user/item features. By extracting explicit user opinions about various aspects of a product from the reviews, it is possible to learn more details about what aspects a user cares, which further sheds light on the possibility to make explainable recommendations. In this work, we propose the Explicit Factor Model (EFM) to generate explainable recommendations, meanwhile keep a high prediction accuracy. We first extract explicit product features (i.e. aspects) and user opinions by phrase-level sentiment analysis on user reviews, then generate both recommendations and disrecommendations according to the specific product features to the user's interests and the hidden features learned. Besides, intuitional feature-level explanations about why an item is or is not recommended are generated from the model. Offline experimental results on several real-world datasets demonstrate the advantages of our framework over competitive baseline algorithms on both rating prediction and top-K recommendation tasks. Online experiments show that the detailed explanations make the recommendations and disrecommendations more influential on user's purchasing behavior.) <|cite_end|> and topic modeling approaches that provide users with topic word clouds <|cite_start|> (Reference: FLAME: A probabilistic model combining aspect based opinion mining and collaborative filtering: Aspect-based opinion mining from online reviews has attracted a lot of attention recently. Given a set of reviews, the main task of aspect-based opinion mining is to extract major aspects of the items and to infer the latent aspect ratings from each review. However, users may have different preferences which might lead to different opinions on the same aspect of an item. Even if fine-grained aspect rating analysis is provided for each review, it is still difficult for a user to judge whether a specific aspect of an item meets his own expectation. In this paper, we study the problem of estimating personalized sentiment polarities on different aspects of the items. We propose a unified probabilistic model called Factorized Latent Aspect ModEl (FLAME), which combines the advantages of collaborative filtering and aspect based opinion mining. FLAME learns users' personalized preferences on different aspects from their past reviews, and predicts users' aspect ratings on new items by collective intelligence.
Experiments on two online review datasets show that FLAME outperforms state-of-the-art methods on the tasks of aspect identification and aspect rating prediction.) <|cite_end|>. Graph-based models and knowledge-graph-based explanations have been developed to generate sentences that explain a recommendation using relations in the knowledge graph (KG) <|cite_start|> (Reference: UniWalk: Explainable and Accurate Recommendation for Rating and Network Data: How can we leverage social network data and observed ratings to correctly recommend proper items and provide a persuasive explanation for the recommendations? Many online services provide social networks among users, and it is crucial to utilize social information since recommendation by a friend is more likely to grab attention than the one from a random user. Also, explaining why items are recommended is very important in encouraging the users' actions such as actual purchases. Exploiting both ratings and social graph for recommendation, however, is not trivial because of the heterogeneity of the data. In this paper, we propose UniWalk, an explainable and accurate recommender system that exploits both social network and rating data. UniWalk combines both data into a unified graph, learns latent features of users and items, and recommends items to each user through the features. Importantly, it explains why items are recommended together with the recommendation results. Extensive experiments show that UniWalk provides the best explainability and achieves the state-of-the-art accuracy.) <|cite_end|>. Additional strategies range from leveraging the attention mechanism <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> to select item reviews serving as explanations <|cite_start|> (Reference: Neural attentional rating regression with review-level explanations: Reviews information is dominant for users to make online purchasing decisions in e-commerces. However, the usefulness of reviews is varied. We argue that less-useful reviews hurt model's performance, and are also less meaningful for user's reference. While some existing models utilize reviews for improving the performance of recommender systems, few of them consider the usefulness of reviews for recommendation quality.
In this paper, we introduce a novel attention mechanism to explore the usefulness of reviews, and propose a Neural Attentional Regression model with Review-level Explanations (NARRE) for recommendation. Specifically, NARRE can not only predict precise ratings, but also learn the usefulness of each review simultaneously. Therefore, the highly-useful reviews are obtained which provide review-level explanations to help users make better and faster decisions. Extensive experiments on benchmark datasets of Amazon and Yelp on different domains show that the proposed NARRE model consistently outperforms the state-of-the-art recommendation approaches, including PMF, NMF, SVD++, HFT, and DeepCoNN in terms of rating prediction, by the proposed attention model that takes review usefulness into consideration. Furthermore, the selected reviews are shown to be effective when taking existing review-usefulness ratings in the system as ground truth. Besides, crowd-sourcing based evaluations reveal that in most cases, NARRE achieves equal or even better performances than system's usefulness rating method in selecting reviews. And it is flexible to offer great help on the dominant cases in real e-commerce scenarios when the ratings on review-usefulness are not available in the system.) <|cite_end|> and employing modern Large Language Models to generate explanations in natural language <|cite_start|> (Reference: GPT: This autumn's Shanghai drama showcase included a farce entitled 'GPT Is Not Normal' (《GPT不正常》), created and performed by the Shanghai Farce Troupe (written by Zhao Huanan and Yan Shunkai, directed by Yan Shunkai). Watching this play is refreshing. Its main strength is its careful attention to character portrayal, letting the audience reflect on certain questions amid the laughter and come away with something gained. The hepatitis A scare that swept Shanghai two years ago still leaves people with lingering fear. When hepatitis A was rampant, some people in society indeed turned pale at the mere mention of the disease, anxious and fearful, and even behaved somewhat abnormally. By choosing this subject, so familiar to Shanghainese, the author not only easily arouses interest and resonance, but also works with material that holds considerable potential for mining comic elements. What is commendable is that the writer and director did not rack their brains to manufacture external gimmicks to please the audience, but instead earnestly revealed the distinctive thoughts and personalities of the characters, striving to mine the comic elements within them.) <|cite_end|> <|cite_start|> (Reference: Crowd-based personalized natural language explanations for recommendations: Explanations are important for users to make decisions on whether to take recommendations. However, algorithm generated explanations can be overly simplistic and unconvincing. We believe that humans can overcome these limitations. Inspired by how people explain word-of-mouth recommendations, we designed a process, combining crowdsourcing and computation, that generates personalized natural language explanations. We modeled key topical aspects of movies, asked crowdworkers to write explanations based on quotes from online movie reviews, and personalized the explanations presented to users based on their rating history. We evaluated the explanations by surveying 220 MovieLens users, finding that compared to personalized tag-based explanations, natural language explanations: 1) contain a more appropriate amount of information, 2) earn more trust from users, and 3) make users more satisfied. This paper contributes to the research literature by describing a scalable process for generating high quality and personalized natural language explanations, improving on state-of-the-art content-based explanations, and showing the feasibility and advantages of approaches that combine human wisdom with algorithmic processes.) <|cite_end|>. Nonetheless, the majority of these explanations are generated post-hoc and may not accurately represent the model's underlying reasoning.
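To make the attention-based selection strategy mentioned above concrete, the following is a minimal, hypothetical sketch of the general mechanism: each of an item's reviews is scored by attention against a user–item query, and the highest-weighted review is surfaced as the explanation. This illustrates the idea only; it is not NARRE's actual architecture, and all names and dimensions are our own.
\begin{verbatim}
import numpy as np

def select_explanatory_review(review_embs, query_emb):
    """Toy attention over an item's reviews: dot-product scores against a
    user-item query, softmax into weights, return the top review's index."""
    scores = review_embs @ query_emb              # (n_reviews,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over reviews
    return int(weights.argmax()), weights

# Usage: five reviews embedded in a 16-d space, query from the user-item pair.
rng = np.random.default_rng(0)
idx, w = select_explanatory_review(rng.normal(size=(5, 16)),
                                   rng.normal(size=16))
print(f"show review {idx} as explanation (attention weight {w[idx]:.2f})")
\end{verbatim}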
Recent advancements in Reinforcement Learning (RL) applied to KG reasoning for recommendation offer intrinsic interpretability without compromising predictive performance. In this paradigm, an RL agent navigates through the KG using relations between entities, starting from a learner and concluding at the course to be recommended, thereby inherently providing an interpretable line of reasoning (a minimal sketch of such a rollout is given at the end of this paragraph). To our knowledge, Policy-Guided Path Reasoning (PGPR) <|cite_start|> (Reference: Reinforcement Knowledge Graph Reasoning for Explainable Recommendation: Recent advances in personalized recommendation have sparked great interest in the exploitation of rich structured information provided by knowledge graphs. Unlike most existing approaches that only focus on leveraging knowledge graphs for more accurate recommendation, we perform explicit reasoning with knowledge for decision making so that the recommendations are generated and supported by an interpretable causal inference procedure. To this end, we propose a method called Policy-Guided Path Reasoning (PGPR), which couples recommendation and interpretability by providing actual paths in a knowledge graph. Our contributions include four aspects. We first highlight the significance of incorporating knowledge graphs into recommendation to formally define and interpret the reasoning process. Second, we propose a reinforcement learning (RL) approach featuring an innovative soft reward strategy, user-conditional action pruning and a multi-hop scoring function. Third, we design a policy-guided graph search algorithm to efficiently and effectively sample reasoning paths for recommendation. Finally, we extensively evaluate our method on several large-scale real-world benchmark datasets, obtaining favorable results compared with state-of-the-art methods.) <|cite_end|> was the first approach to use RL applied to KG reasoning for explainable recommendation. Since PGPR, several improvements have been proposed <|cite_start|> (Reference: Explainable Knowledge Graph-based Recommendation via Deep Reinforcement Learning: This paper studies recommender systems with knowledge graphs, which can effectively address the problems of data sparsity and cold start. Recently, a variety of methods have been developed for this problem, which generally try to learn effective representations of users and items and then match items to users according to their representations. Though these methods have been shown quite effective, they lack good explanations, which are critical to recommender systems. In this paper, we take a different path and propose generating recommendations by finding meaningful paths from users to items. Specifically, we formulate the problem as a sequential decision process, where the target user is defined as the initial state, and the walks on the graphs are defined as actions. We shape the rewards according to existing state-of-the-art methods and then train a policy function with policy gradient methods. Experimental results on three real-world datasets show that our proposed method not only provides effective recommendations but also offers good explanations.) <|cite_end|> <|cite_start|> (Reference: Multi-level Recommendation Reasoning over Knowledge Graphs with Reinforcement Learning: Knowledge graphs (KGs) have been widely used to improve recommendation accuracy. The multi-hop paths on KGs also enable recommendation reasoning, which is considered a crystal type of explainability. In this paper, we propose a reinforcement learning framework for multi-level recommendation reasoning over KGs, which leverages both ontology-view and instance-view KGs to model multi-level user interests.
This framework ensures convergence to a more satisfying solution by effectively transferring high-level knowledge to lower levels. Based on the framework, we propose a multi-level reasoning path extraction method, which automatically selects between high-level concepts and low-level ones to form reasoning paths that better reveal user interests. Experiments on three datasets demonstrate the effectiveness of our method.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning over Knowledge Graphs for Explainable Dialogue Intent Mining: In light of the millions of households that have adopted intelligent assistant powered devices, multi-turn dialogue has become an important field of inquiry. Most current methods identify the underlying intent in the dialogue using opaque classification techniques that fail to provide any interpretable basis for the classification. To address this, we propose a scheme to interpret the intent in multi-turn dialogue based on specific characteristics of the dialogue text. We rely on policy-guided reinforcement learning to identify paths in a graph to confirm concrete paths of inference that serve as interpretable explanations. The graph is induced based on the multi-turn dialogue user utterances, the intents, i.e., standard queries of the dialogues, and the sub-intents associated with the dialogues. Our reinforcement learning method then discerns the characteristics of the dialogue in chronological order as the basis for multi-turn dialogue path selection. Finally, we consider a wide range of recently proposed knowledge graph-based recommender systems as baselines, mostly based on deep reinforcement learning and our method performs best.) <|cite_end|> <|cite_start|> (Reference: {CAFE:: The purpose of this research is to analyze the financial management and marketing mix strategy by Café ABC in the development of Cafe as an implication of management policy. The research method used is quantitative for financial management analysis with parameters Margin Contribution, Break Event Point, Margin of Safety, Shut Down Point, and Degree of Operating Leverage. And a qualitative method for formulating strategies with IFE and EFE matrix parameters, SWOT matrix, IE matrix and QSPM as decision making. The data used in this study are primary data sources obtained from interviews with company managers. Secondary data is obtained from the financial and operational reports of the Café every month, as well as data from the relevant literature. From the results of the financial management analysis, it shows that the difference between income and costs from individual guests is positive, so that activities to separate individual guests from group guests can continue. The results of the IE matrix show that Cafe is in a growth and build position with an intensive strategy, namely market penetration strategy, market development and product development. The results of the QSPM include the main alternative strategy with the highest TAS value of 5,761, the S-O strategy ranks third with a TAS value of 4,932 and the sixth with a TAS value of 3,724. The S-T strategy ranks seventh with a TAS value of 3,515, W-O strategy with a TAS 5,534 value, the strategy for optimizing development is fifth with a TAS value of 3,804 and a faster return on profits for group guests of Rp. 59,163,725,- W-T strategy obtained TAS value of 5,761 with the largest contribution profit value for group guests of Rp. 186.033,244,-Keywords) <|cite_end|>. 
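The works cited above share a mechanism that can be sketched as follows: a policy walks the KG hop by hop from a learner node, and the traversed path is both the recommendation and its explanation. The sketch below is a minimal, hypothetical illustration in which a hand-written greedy scorer stands in for the learned policy network; the toy graph, relation names, and scoring rule are our own assumptions, not PGPR's implementation.
\begin{verbatim}
# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
KG = {
    "learner_1": [("enrolled_in", "course_A"), ("enrolled_in", "course_B")],
    "course_A": [("taught_by", "teacher_X"), ("has_category", "data_science")],
    "course_B": [("has_category", "data_science")],
    "teacher_X": [("teaches", "course_C")],
    "data_science": [("category_of", "course_C"), ("category_of", "course_D")],
    "course_C": [], "course_D": [],
}

def rollout(start, score, max_hops=3):
    """Greedy stand-in for a learned policy: at each hop take the
    highest-scoring edge; a path ending on a course is a recommendation
    plus its own explanation."""
    path, node = [start], start
    for _ in range(max_hops):
        edges = [e for e in KG[node] if e[1] not in path]  # avoid cycles
        if not edges:
            break
        rel, node = max(edges, key=score)
        path += [rel, node]
    return path

# Illustrative scorer: prefer course entities, mimicking a soft reward.
path = rollout("learner_1", score=lambda e: e[1].startswith("course"))
print(" -> ".join(path))
# e.g. learner_1 -> enrolled_in -> course_A -> taught_by -> teacher_X
#      -> teaches -> course_C
\end{verbatim}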
In the educational domain, course recommendation systems have been the subject of extensive study, covering a wide variety of aspects. Investigations have been conducted into serendipity-based diverse course recommendation <|cite_start|> (Reference: Combating the Filter Bubble: Designing for Serendipity in a University Course Recommendation System: Collaborative filtering based algorithms, including Recurrent Neural Networks (RNN), tend towards predicting a perpetuation of past observed behavior. In a recommendation context, this can lead to an overly narrow set of suggestions lacking in serendipity and inadvertently placing the user in what is known as a "filter bubble." In this paper, we grapple with the issue of the filter bubble in the context of a course recommendation system in production at a public university. Most universities in the United States encourage students to explore developing interests while simultaneously advising them to adhere to course taking norms which progress them towards graduation. These competing objectives, and the stakes involved for students, make this context a particularly meaningful one for investigating real-world recommendation strategies. We introduce a novel modification to the skip-gram model applied to nine years of historic course enrollment sequences to learn course vector representations used to diversify recommendations based on similarity to a student's specified favorite course. This model, which we call multifactor2vec, is intended to improve the semantics of the primary token embedding by also learning embeddings of potentially conflated factors of the token (e.g., instructor). Our offline testing found this model improved accuracy and recall on our course similarity and analogy validation sets over a standard skip-gram. Incorporating course catalog description text resulted in further improvements. We compare the performance of these models to the system's existing RNN-based recommendations with a user study of undergraduates (N = 70) rating six characteristics of their course recommendations. Results of the user study show a dramatic lack of novelty in RNN recommendations and depict the characteristic trade-offs that make serendipity difficult to achieve.) <|cite_end|> and explainable learning activities recommendation through open learner models (OLMs) <|cite_start|> (Reference: Complementing educational recommender systems with open learner models: Educational recommender systems (ERSs) aim to adaptively recommend a broad range of personalised resources and activities to students that will most meet their learning needs. Commonly, ERSs operate as a "black box" and give students no insight into the rationale of their choice. Recent contributions from the learning analytics and educational data mining communities have emphasised the importance of transparent, understandable and open learner models (OLMs) that provide insight and enhance learners' understanding of interactions with learning environments. In this paper, we aim to investigate the impact of complementing ERSs with transparent and understandable OLMs that provide justification for their recommendations. We conduct a randomised control trial experiment using an ERS with two interfaces ("Non-Complemented Interface" and "Complemented Interface") to determine the effect of our approach on student engagement and their perception of the effectiveness of the ERS. 
Overall, our results suggest that complementing an ERS with an OLM can have a positive effect on student engagement and their perception about the effectiveness of the system despite potentially making the system harder to navigate. In some cases, complementing an ERS with an OLM has the negative consequence of decreasing engagement, understandability and sense of fairness.) <|cite_end|>. Research has also delved into peer learner recommendation <|cite_start|> (Reference: Reciprocal peer recommendation for learning purposes: Larger student intakes by universities and the rise of education through Massive Open Online Courses has led to less direct contact time with teaching staff for each student. One potential way of addressing this contact deficit is to invite learners to engage in peer learning and peer support; however, without technological support they may be unable to discover suitable peer connections that can enhance their learning experience. Two different research subfields with ties to recommender systems provide partial solutions to this problem. Reciprocal recommender systems provide sophisticated filtering techniques that enable users to connect with one another. To date, however, the main focus of reciprocal recommender systems has been on providing recommendation in online dating sites. Recommender systems for technology enhanced learning have employed and tailored exemplary recommenders towards use in education, with a focus on recommending learning content rather than other users. In this paper, we first discuss the importance of supporting peer learning and the role recommending reciprocal peers can play in educational settings. We then introduce our open-source course-level recommendation platform called RiPPLE that has the capacity to provide reciprocal peer recommendation. The proposed reciprocal peer recommender algorithm is evaluated against key criteria such as scalability, reciprocality, coverage, and quality and shows improvement over a baseline recommender. Primary results indicate that the system can help learners connect with peers based on their knowledge gaps and reciprocal preferences, with designed flexibility to address key limitations of existing algorithms identified in the literature.) <|cite_end|>, target course-oriented recommendation <|cite_start|> (Reference: Goal-based Course Recommendation: With cross-disciplinary academic interests increasing and academic advising resources over capacity, the importance of exploring data-assisted methods to support student decision making has never been higher. We build on the findings and methodologies of a quickly developing literature around prediction and recommendation in higher education and develop a novel recurrent neural network-based recommendation system for suggesting courses to help students prepare for target courses of interest, personalized to their estimated prior knowledge background and zone of proximal development. We validate the model using tests of grade prediction and the ability to recover prerequisite relationships articulated by the university. In the third validation, we run the fully personalized recommendation for students the semester before taking a historically difficult course and observe differential overlap with our would-be suggestions. 
While not proof of causal effectiveness, these three evaluation perspectives on the performance of the goal-based model build confidence and bring us one step closer to deployment of this personalized course preparation affordance in the wild.) <|cite_end|>, and the recommendation of short video clips to mitigate information overload <|cite_start|> (Reference: Proceedings of the 15th International Conference on Educational Data Mining, EDM 2022, Durham, UK, July 24-27, 2022: ) <|cite_end|>. In the specific domain of MOOC recommender systems based on neural networks (NN), multiple research directions have been pursued. These include optimizing recommendation accuracy <|cite_start|> (Reference: A course hybrid recommender system for limited user information scenarios: Recommender systems in educational contexts have proven to be effective in identifying learning resources that fit the interests and needs of learners. Their usage has been of special interest in online self-learning scenarios to increase student retention and improve the learning experience. In this article, we present the design of a hybrid course recommendation system for an online learning platform. The proposed hybrid system articulates the recommendation carried out by collaborative and content-based filter strategies. For the collaborative filtering recommender, we address the challenge of recommending meaningful content with limited information from users by using rating estimation strategies from a log system (Google Analytics). Our approach posits strategies to mine logs and generates effective ratings through the counting and temporal analysis of sessions. We evaluate different rating penalty strategies and compare the use of per-user metrics for rating estimation. For the content-based recommender, we compare different text embeddings that range from well-known topic models (LSA and LDA) to more recent multilingual contextual embeddings pre-trained on large-scale unlabelled corpora. The results show that the best model in terms of P@5 was the Collaborative filtering recommendation model with a value of 0.4, i.e., two out of five courses recommended could be of the user’s interest. This result is satisfactory considering that our models were trained from ratings inferred from implicit user data. The content-based strategies did not yield significant results, however, these strategies help to mitigate the cold start problem and validate the use of a combined hybrid strategy.) <|cite_end|> <|cite_start|> (Reference: Massive open online courses (MOOCs) recommendation modeling using deep learning: Since knowledge in the world of internet has always been developed with updated information. Recommendation system for a Massive Open Online Courses (MOOCs) can help create endless learning opportunities. This study presents a Massive Open Online Courses Recommendation Modeling using Deep Learning with Multilayer Perceptron architecture which is suitable for enormous data analysis. The research methodology begins with the process used for the data analysis process, using the data mining technique according to the Cross-industry standard process for data mining (CRISP-DM), consisting of six steps: business understanding, understanding of data, data preparation, modeling, evaluation and deployment. We received a set of data from Harvard and MIT, published for edX learning data in 2012-2013, consisting of 16 programs, 18 features and 641138 sample items.
The research found that the most appropriate model is a model with 7 hidden layers and 1e-3 learning rate, processed by GPU acceleration for 250 Epochs. The evaluation of the model’s performance is evaluated by calculating the precision value using 542784 testing samples.) <|cite_end|> <|cite_start|> (Reference: Novel online recommendation algorithm for massive open online courses (NoR-MOOCs): Massive Open Online Courses (MOOCs) have gained in popularity over the last few years. The space of online learning resources has been increasing exponentially and has created a problem of information overload. To overcome this problem, recommender systems that can recommend learning resources to users according to their interests have been proposed. MOOCs contain a huge amount of data with the quantity of data increasing as new learners register. Traditional recommendation techniques suffer from scalability, sparsity and cold start problems resulting in poor quality recommendations. Furthermore, they cannot accommodate the incremental update of the model with the arrival of new data making them unsuitable for MOOCs dynamic environment. From this line of research, we propose a novel online recommender system, namely NoR-MOOCs, that is accurate, scales well with the data and moreover overcomes previously recorded problems with recommender systems. Through extensive experiments conducted over the COCO data-set, we have shown empirically that NoR-MOOCs significantly outperforms traditional KMeans and Collaborative Filtering algorithms in terms of predictive and classification accuracy metrics.) <|cite_end|> <|cite_start|> (Reference: Goal-based Course Recommendation: With cross-disciplinary academic interests increasing and academic advising resources over capacity, the importance of exploring data-assisted methods to support student decision making has never been higher. We build on the findings and methodologies of a quickly developing literature around prediction and recommendation in higher education and develop a novel recurrent neural network-based recommendation system for suggesting courses to help students prepare for target courses of interest, personalized to their estimated prior knowledge background and zone of proximal development. We validate the model using tests of grade prediction and the ability to recover prerequisite relationships articulated by the university. In the third validation, we run the fully personalized recommendation for students the semester before taking a historically difficult course and observe differential overlap with our would-be suggestions. While not proof of causal effectiveness, these three evaluation perspectives on the performance of the goal-based model build confidence and bring us one step closer to deployment of this personalized course preparation affordance in the wild.) <|cite_end|> <|cite_start|> (Reference: Proceedings of the 15th International Conference on Educational Data Mining, EDM 2022, Durham, UK, July 24-27, 2022: ) <|cite_end|>, ensuring fairness <|cite_start|> (Reference: The Winner Takes It All: Geographic Imbalance and Provider (Un)Fairness in Educational Recommender Systems: Educational recommender systems channel most of the research efforts on the effectiveness of the recommended items. While teachers have a central role in online platforms, the impact of recommender systems for teachers in terms of the exposure such systems give to the courses is an under-explored area. 
In this paper, we consider data coming from a real-world platform and analyze the distribution of the recommendations w.r.t. the geographical provenience of the teachers. We observe that data is highly imbalanced towards the United States, in terms of offered courses and of interactions. These imbalances are exacerbated by recommender systems, which overexpose the country w.r.t. its representation in the data, thus generating unfairness for teachers outside that country. To introduce equity, we propose an approach that regulates the share of recommendations given to the items produced in a country (visibility) and the position of the items in the recommended list (exposure).) <|cite_end|> <|cite_start|> (Reference: Interplay between Upsampling and Regularization for Provider Fairness in Recommender Systems: Considering the impact of recommendations on item providers is one of the duties of multi-sided recommender systems. Item providers are key stakeholders in online platforms, and their earnings and plans are influenced by the exposure their items receive in recommended lists. Prior work showed that certain minority groups of providers, characterized by a common sensitive attribute (e.g., gender or race), are being disproportionately affected by indirect and unintentional discrimination. Our study in this paper handles a situation where ($i$) the same provider is associated with multiple items of a list suggested to a user, ($ii$) an item is created by more than one provider jointly, and ($iii$) predicted user-item relevance scores are biasedly estimated for items of provider groups. Under this scenario, we assess disparities in relevance, visibility, and exposure, by simulating diverse representations of the minority group in the catalog and the interactions. Based on emerged unfair outcomes, we devise a treatment that combines observation upsampling and loss regularization, while learning user-item relevance scores. Experiments on real-world data demonstrate that our treatment leads to lower disparate relevance. The resulting recommended lists show fairer visibility and exposure, higher minority item coverage, and negligible loss in recommendation utility.) <|cite_end|> <|cite_start|> (Reference: Equality of Learning Opportunity via Individual Fairness in Personalized Recommendations: Online educational platforms are playing a primary role in mediating the success of individuals' careers. Therefore, while building overlying content recommendation services, it becomes essential to guarantee that learners are provided with equal recommended learning opportunities, according to the platform values, context, and pedagogy. Though the importance of ensuring equality of learning opportunities has been well investigated in traditional institutions, how this equality can be operationalized in online learning ecosystems through recommender systems is still under-explored. In this paper, we formalize educational principles that model recommendations' learning properties, and a novel fairness metric that combines them in order to monitor the equality of recommended learning opportunities among learners. Then, we envision a scenario wherein an educational platform should be arranged in such a way that the generated recommendations meet each principle to a certain degree for all learners, constrained to their individual preferences. Under this view, we explore the learning opportunities provided by recommender systems in a large-scale course platform, uncovering systematic inequalities. 
To reduce this effect, we propose a novel post-processing approach that balances personalization and equality of recommended opportunities. Experiments show that our approach leads to higher equality, with a negligible loss in personalization. Our study moves a step forward in operationalizing the ethics of human learning in recommendations, a core unit of intelligent educational systems.) <|cite_end|> <|cite_start|> (Reference: Novel online recommendation algorithm for massive open online courses (NoR-MOOCs): Massive Open Online Courses (MOOCs) have gained in popularity over the last few years. The space of online learning resources has been increasing exponentially and has created a problem of information overload. To overcome this problem, recommender systems that can recommend learning resources to users according to their interests have been proposed. MOOCs contain a huge amount of data with the quantity of data increasing as new learners register. Traditional recommendation techniques suffer from scalability, sparsity and cold start problems resulting in poor quality recommendations. Furthermore, they cannot accommodate the incremental update of the model with the arrival of new data making them unsuitable for MOOCs dynamic environment. From this line of research, we propose a novel online recommender system, namely NoR-MOOCs, that is accurate, scales well with the data and moreover overcomes previously recorded problems with recommender systems. Through extensive experiments conducted over the COCO data-set, we have shown empirically that NoR-MOOCs significantly outperforms traditional KMeans and Collaborative Filtering algorithms in terms of predictive and classification accuracy metrics.) <|cite_end|> <|cite_start|> (Reference: The Effect of Algorithmic Bias on Recommender Systems for Massive Open Online Courses: ) <|cite_end|>, and augmenting explainability <|cite_start|> (Reference: Capacity Tracing-Enhanced Course Recommendation in MOOCs: Massive open online courses (MOOCs) have been an important learning tool in education. In order to reduce the high dropout rate and improve learners’ satisfactions, it is urgent for MOOCs platform to provide course recommendation and tutoring service. To achieve it, it is necessary to determine and trace learners’ learning state. Cognitive diagnosis in psychometric is a good way to quantify learners’ capacities, but it demands explicit learner feedback, which does not always exist in MOOCs platform, such a typical weak-interaction scenario. Therefore, in this article, multidimensional item response theory (MIRT) is exploratively integrated into recommendation models in MOOCs by introducing a time-effectiveness hypothesis to obtain the implicit response on a followed course. To dynamically update learners’ capacities by considering real-time and capacity multidimensionality, MIRT is extended to a capacity tracing model. The estimation for learner capacity is treated as attributes and integrated into collaborative filtering framework in course recommendation. To the best of our knowledge, this is the first work to integrate capacity tracing into course recommendation in MOOCs. Extensive experiments are conducted on a real-world dataset, demonstrating that the capacity tracing-enhanced course recommendation has improved effectiveness and explainability in MOOCs.) <|cite_end|>. 
While NN-based approaches have set benchmarks in predictive accuracy, this efficacy frequently comes at the cost of model interpretability, raising concerns about the trade-off between performance and transparency. In the specific context of RL applied to KG reasoning for MOOC recommendation, existing research remains limited. To our knowledge, only two studies directly address this issue. The first, Reinforced Explainable Knowledge Concept Recommendation (EKCRec) <|cite_start|> (Reference: Reinforced Explainable Knowledge Concept Recommendation in MOOCs: In this article, we study knowledge concept recommendation in Massive Open Online Courses (MOOCs) in an explainable manner. Knowledge concepts, composing course units (e.g., videos) in MOOCs, refer to topics and skills that students are expected to master. Compared to traditional course recommendation in MOOCs, knowledge concepts recommendation has drawn more attention because students’ interests over knowledge concepts can better revealstudents’ real intention in a more refined granularity. However, there are three unique challenges in knowledge concept recommendation: (1) How to design an appropriate data structure to capture complex relationships between knowledge concepts, course units, and other participants (e.g., students, teachers)? (2) How to model interactions between students and knowledge concepts? (3) How to make explainable recommendation results to students? To tackle these challenges, we formulate the knowledge concept recommendation as a reinforcement learning task integrated with MOOC knowledge graph (KG). Specifically, we first construct MOOC KG as the environment to capture all the relationships and behavioral histories by considering all the entities (e.g., students, teachers, videos, courses, and knowledge concepts) on the MOOC provider. Then, to model the interactions between students and knowledge concepts, we train an agent to mimic students’ learning behavioral patterns facing the complex environment. Moreover, to provide explainable recommendation results, we generate recommended knowledge concepts in the format of a path from MOOC KG to indicate semantic reasons. Finally, we conduct extensive experiments on a real-world MOOC dataset to demonstrate the effectiveness of our proposed method.) <|cite_end|>, adopts an approach similar to PGPR for MOOCs, yet it has several limitations. These include the use of proprietary datasets, the absence of user studies, and the lack of publicly available code for replication. The second work presents preliminary results applying both PGPR and CAFE methodologies to publicly available MOOC datasets. However, this work remains in the discussion stage and constitutes an ongoing project <|cite_start|> (Reference: Towards Explainable Educational Recommendation through Path Reasoning Methods: Current recommender systems in education lack explainability and interpretability, making it challenging for stakeholders to understand how the recommended content relates to them. Path reasoning methods are an emerging class of recommender systems that provides users with the reasoning behind a recommendation. While these methods have been shown to work well in several domains, there is no extensive research on their effectiveness in the context of education. In this ongoing project, we investigate the extent to which the existing path reasoning methods meet utility and beyond utility objectives in educational data. 
Experiments on two large-scale online course datasets show that this class of methods yields promising results and poses the ground for future advances.) <|cite_end|>. In this study, we introduce an explainable MOOC recommendation system based on RL applied to KG reasoning that uses PGPR <|cite_start|> (Reference: Reinforcement Knowledge Graph Reasoning for Explainable Recommendation: Recent advances in personalized recommendation have sparked great interest in the exploitation of rich structured information provided by knowledge graphs. Unlike most existing approaches that only focus on leveraging knowledge graphs for more accurate recommendation, we perform explicit reasoning with knowledge for decision making so that the recommendations are generated and supported by an interpretable causal inference procedure. To this end, we propose a method called Policy-Guided Path Reasoning (PGPR), which couples recommendation and interpretability by providing actual paths in a knowledge graph. Our contributions include four aspects. We first highlight the significance of incorporating knowledge graphs into recommendation to formally define and interpret the reasoning process. Second, we propose a reinforcement learning (RL) approach featuring an innovative soft reward strategy, user-conditional action pruning and a multi-hop scoring function. Third, we design a policy-guided graph search algorithm to efficiently and effectively sample reasoning paths for recommendation. Finally, we extensively evaluate our method on several large-scale real-world benchmark datasets, obtaining favorable results compared with state-of-the-art methods.) <|cite_end|>. Initially developed for product recommendation in e-commerce settings, PGPR relies on domain-specific heuristics tailored for the Amazon dataset's KG. In contrast, our adaptation generalizes the model to function with a new set of knowledge graphs, obviating the need for domain-specific adjustments. The robustness of our approach is validated through evaluations performed on two publicly accessible, real-world MOOC datasets, demonstrating its efficacy in providing both accurate and interpretable recommendations. Additionally, in contrast to previous work, we conducted an in-depth user study to probe the alignment between end-user perception and the path-based explanations generated by our model. Specifically, we investigate three aspects: initially, we analyze users' preferences for path-based explanations against traditional explanations based on popularity and the behavior of similar learners (Collaborative Filtering); subsequently, we assess the alignment of the paths' content (teacher, course category, learner) with learners' motivations; finally, we investigate the threshold beyond which learners begin to perceive the explanation paths as overly complex. The implementation is made publicly available to facilitate future research\footnote{\url{https://github.com/epfl-ml4ed/courserec}}. With our analyses, we aim to answer the following research questions: \begin{enumerate} \item[(\textbf{RQ1})] What is the performance and interpretability of path-based recommendations? \item[(\textbf{RQ2})] What are users' preferences for explanations in terms of approach, motivation, and complexity? \end{enumerate} Our investigation yields several findings: \begin{itemize} \item Path-based models are competitive with state-of-the-art MOOC recommendation models in terms of accuracy.
\item Learners showed a preference for path-based explanations over popularity-based explanations, and a preference comparable to that for collaborative filtering explanations. \item Learners' motivation has a significant impact on how much detail they want in explanations; those who are learning for self-improvement want more comprehensive details. \item Learners do not like paths that are too long or complicated. \end{itemize} <|paper_end|>
[ "<|reference_start|> FLAME: A probabilistic model combining aspect based opinion mining and collaborative filtering: Aspect-based opinion mining from online reviews has attracted a lot of attention recently. Given a set of reviews, the main task of aspect-based opinion mining is to extract major aspects of the items and to infer the latent aspect ratings from each review. However, users may have different preferences which might lead to different opinions on the same aspect of an item. Even if fine-grained aspect rating analysis is provided for each review, it is still difficult for a user to judge whether a specific aspect of an item meets his own expectation. In this paper, we study the problem of estimating personalized sentiment polarities on different aspects of the items. We propose a unified probabilistic model called Factorized Latent Aspect ModEl (FLAME), which combines the advantages of collaborative filtering and aspect based opinion mining. FLAME learns users' personalized preferences on different aspects from their past reviews, and predicts users' aspect ratings on new items by collective intelligence. Experiments on two online review datasets show that FLAME outperforms state-of-the-art methods on the tasks of aspect identification and aspect rating prediction. <|reference_end|>", "<|reference_start|> Crowd-based personalized natural language explanations for recommendations: Explanations are important for users to make decisions on whether to take recommendations. However, algorithm generated explanations can be overly simplistic and unconvincing. We believe that humans can overcome these limitations. Inspired by how people explain word-of-mouth recommendations, we designed a process, combining crowdsourcing and computation, that generates personalized natural language explanations. We modeled key topical aspects of movies, asked crowdworkers to write explanations based on quotes from online movie reviews, and personalized the explanations presented to users based on their rating history. We evaluated the explanations by surveying 220 MovieLens users, finding that compared to personalized tag-based explanations, natural language explanations: 1) contain a more appropriate amount of information, 2) earn more trust from users, and 3) make users more satisfied. This paper contributes to the research literature by describing a scalable process for generating high quality and personalized natural language explanations, improving on state-of-the-art content-based explanations, and showing the feasibility and advantages of approaches that combine human wisdom with algorithmic processes. <|reference_end|>", "<|reference_start|> Reinforcement Knowledge Graph Reasoning for Explainable Recommendation: Recent advances in personalized recommendation have sparked great interest in the exploitation of rich structured information provided by knowledge graphs. Unlike most existing approaches that only focus on leveraging knowledge graphs for more accurate recommendation, we perform explicit reasoning with knowledge for decision making so that the recommendations are generated and supported by an interpretable causal inference procedure. To this end, we propose a method called Policy-Guided Path Reasoning (PGPR), which couples recommendation and interpretability by providing actual paths in a knowledge graph. Our contributions include four aspects. 
We first highlight the significance of incorporating knowledge graphs into recommendation to formally define and interpret the reasoning process. Second, we propose a reinforcement learning (RL) approach featuring an innovative soft reward strategy, user-conditional action pruning and a multi-hop scoring function. Third, we design a policy-guided graph search algorithm to efficiently and effectively sample reasoning paths for recommendation. Finally, we extensively evaluate our method on several large-scale real-world benchmark datasets, obtaining favorable results compared with state-of-the-art methods. <|reference_end|>", "<|reference_start|> Novel online recommendation algorithm for massive open online courses (NoR-MOOCs): Massive Open Online Courses (MOOCs) have gained in popularity over the last few years. The space of online learning resources has been increasing exponentially and has created a problem of information overload. To overcome this problem, recommender systems that can recommend learning resources to users according to their interests have been proposed. MOOCs contain a huge amount of data with the quantity of data increasing as new learners register. Traditional recommendation techniques suffer from scalability, sparsity and cold start problems resulting in poor quality recommendations. Furthermore, they cannot accommodate the incremental update of the model with the arrival of new data making them unsuitable for MOOCs dynamic environment. From this line of research, we propose a novel online recommender system, namely NoR-MOOCs, that is accurate, scales well with the data and moreover overcomes previously recorded problems with recommender systems. Through extensive experiments conducted over the COCO data-set, we have shown empirically that NoR-MOOCs significantly outperforms traditional KMeans and Collaborative Filtering algorithms in terms of predictive and classification accuracy metrics. <|reference_end|>" ]
[ 5, 10, 11, 23 ]
{"<|cite_1|>": "ss-2070367", "<|cite_2|>": "ss-884685", "<|cite_3|>": "ss-884685", "<|cite_4|>": "ss-1413018", "<|cite_5|>": "ss-1271405", "<|multi_cite_6_2|>": "ss-1003714", "<|cite_7|>": "arxiv-137674", "<|cite_8|>": "arxiv-126595", "<|cite_9|>": "ss-1958039", "<|multi_cite_10_1|>": "ss-742396", "<|multi_cite_10_2|>": "ss-2532084", "<|cite_11|>": "arxiv-209433", "<|multi_cite_12_1|>": "ss-1964704", "<|multi_cite_12_2|>": "ss-946973", "<|multi_cite_12_3|>": "ss-1927413", "<|multi_cite_12_4|>": "ss-987842", "<|cite_13|>": "ss-749176", "<|cite_14|>": "ss-2249112", "<|cite_15|>": "ss-977703", "<|cite_16|>": "arxiv-185656", "<|cite_17|>": "ss-1190225", "<|multi_cite_18_1|>": "ss-2249113", "<|multi_cite_18_2|>": "ss-2249114", "<|multi_cite_18_3|>": "ss-2249115", "<|multi_cite_18_4|>": "arxiv-185656", "<|multi_cite_18_5|>": "ss-1190225", "<|multi_cite_19_1|>": "ss-680997", "<|multi_cite_19_2|>": "arxiv-270104", "<|multi_cite_19_3|>": "arxiv-270107", "<|multi_cite_19_4|>": "ss-2249115", "<|multi_cite_19_5|>": "ss-1653633", "<|multi_cite_20_2|>": "ss-2249116", "<|cite_21|>": "ss-2132166", "<|cite_22|>": "ss-2249117", "<|cite_23|>": "arxiv-209433"}
2401.12508-0
<|paper_start|> Title: On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization Abstract: On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization: We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL). In order to solve such an optimization problem, we apply and analyze the classical stochastic proximal gradient method. In particular, the method has been shown to admit an $O(\epsilon^{-4})$ sample complexity to an $\epsilon$-stationary point, under standard conditions. Since the variance of the classical stochastic gradient estimator is typically large, which slows down the convergence, we also apply an efficient stochastic variance-reduced proximal gradient method with an importance-sampling-based ProbAbilistic Gradient Estimator (PAGE). Our analysis shows that the sample complexity can be improved from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$ under additional conditions. Our results on the stochastic (variance-reduced) proximal gradient method match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings. To the best of our knowledge, the proposed methods represent a novel approach in addressing the general regularized reward optimization problem. Introduction Reinforcement learning (RL) <|cite_start|> (Reference: Reinforcement Learning: An Introduction) <|cite_end|> has recently become a highly active research area of machine learning, in which an agent learns to make sequential decisions through interaction with an environment. In recent years, RL has achieved tremendous success in many applications such as control, job scheduling, online advertising, and game-playing <|cite_start|> (Reference: A reinforcement learning approach to job-shop scheduling: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling.
A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm TD(λ) is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task--Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.) <|cite_end|> <|cite_start|> (Reference: Sequential cost-sensitive decision making with reinforcement learning: Recently, there has been increasing interest in the issues of cost-sensitive learning and decision making in a variety of applications of data mining. A number of approaches have been developed that are effective at optimizing cost-sensitive decisions when each decision is considered in isolation. However, the issue of sequential decision making, with the goal of maximizing total benefits accrued over a period of time instead of immediate benefits, has rarely been addressed. In the present paper, we propose a novel approach to sequential decision making based on the reinforcement learning framework. Our approach attempts to learn decision rules that optimize a sequence of cost-sensitive decisions so as to maximize the total benefits accrued over time. We use the domain of targeted marketing as a testbed for empirical evaluation of the proposed method. We conducted experiments using approximately two years of monthly promotion data derived from the well-known KDD Cup 1998 donation data set. The experimental results show that the proposed method for optimizing total accrued benefits outperforms the usual targeted-marketing methodology of optimizing each promotion in isolation. We also analyze the behavior of the targeting rules that were obtained and discuss their appropriateness to the application domain.) <|cite_end|> <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|>, to mention a few. One of the central tasks of RL is to solve a certain (expected) reward optimization problem for decision making.
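Concretely, the common computational object across these applications is the expectation of a reward under a parameterized sampling distribution, which in practice is approximated by Monte Carlo. The following toy sketch is ours and purely illustrative (the Gaussian sampler and quadratic reward are arbitrary choices, not tied to any specific application above); it shows this estimation pattern.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def reward(x):
    # A toy reward; in applications this could encode tour length,
    # cumulative return, or the accuracy of a candidate expression.
    return -np.sum(x**2)

def mc_expected_reward(theta, n_samples=10_000, sigma=1.0):
    # Draw x ~ N(theta, sigma^2 I) and average the rewards.
    x = theta + sigma * rng.standard_normal((n_samples, theta.size))
    return np.mean([reward(xi) for xi in x])

theta = np.array([1.0, -2.0])
print(mc_expected_reward(theta))  # approx -(|theta|^2 + 2*sigma^2) = -7
\end{verbatim}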
In this paper, we consider the following problem of maximizing the regularized expected reward: \begin{equation} \label{eq-max-reward} \max_{\theta\in\R^n}\; \cF(\theta):=\E_{x\sim\pi_\theta}\left[\cR_\theta(x)\right] - \cG(\theta), \end{equation} where $\cG: \R^n \to \R\cup\{+\infty\}$ is a closed proper convex (possibly nonsmooth) function, $x\in \R^d$, $\cR_\theta:\R^d\to \R$ is the reward function depending on the parameter $\theta$, and $\pi_\theta$ denotes the probability distribution over a given subset $\cS\subseteq\R^d$ parameterized by $\theta\in \R^n$. By adopting the convention in RL, we call $\pi_\theta$ a policy parameterized by $\theta$. Moreover, for the rest of this paper, we denote $\cJ(\theta):=\E_{x\sim\pi_\theta}\left[\cR_\theta(x)\right]$ as the expected reward function in the \textit{non-oblivious} setting. The learning objective is to learn a decision rule by finding the policy parameter $\theta$ that maximizes the regularized expected reward. There is a large body of work in supervised learning focusing on the \textit{oblivious} setting <|cite_start|> (Reference: Solving large scale linear prediction problems using stochastic gradient descent algorithms: Linear prediction methods, such as least squares for regression, logistic regression and support vector machines for classification, have been extensively used in statistics and machine learning. In this paper, we study stochastic gradient descent (SGD) algorithms on regularized forms of linear prediction methods. This class of methods, related to online algorithms such as perceptron, are both efficient and very simple to implement. We obtain numerical rate of convergence for such algorithms, and discuss its implications. Experiments on text data will be provided to demonstrate numerical and statistical consequences of our theoretical findings.) <|cite_end|> <|cite_start|> (Reference: The Elements of Statistical Learning: Data mining, Inference, and Prediction: In the words of the authors, the goal of this book was to “bring together many of the important new ideas in learning, and explain them in a statistical framework.” The authors have been quite successful in achieving this objective, and their work is a welcome addition to the statistics and learning literatures. Statistics has always been interdisciplinary, borrowing ideas from diverse fields and repaying the debt with contributions, both theoretical and practical, to the other intellectual disciplines. For statistical learning, this cross-fertilization is especially noticeable. This book is a valuable resource, both for the statistician needing an introduction to machine learning and related fields and for the computer scientist wishing to learn more about statistics. Statisticians will especially appreciate that it is written in their own language. The level of the book is roughly that of a second-year doctoral student in statistics, and it will be useful as a textbook for such students. In a stimulating article, Breiman (2001) argued that statistics has been focused too much on a “data modeling culture,” where the model is paramount. Breiman argued instead for an “algorithmic modeling culture,” with emphasis on black-box types of prediction.
Breiman’s article is controversial, and in his discussion, Efron objects that “prediction is certainly an interesting subject, but Leo’s paper overstates both its role and our profession’s lack of interest in it.” Although I mostly agree with Efron, I worry that the courses offered by most statistics departments include little, if any, treatment of statistical learning and prediction. (Stanford, where Efron and the authors of this book teach, is an exception.) Graduate students in statistics certainly need to know more than they do now about prediction, machine learning, statistical learning, and data mining (not disjoint subjects). I hope that graduate courses covering the topics of this book will become more common in statistics curricula. Most of the book is focused on supervised learning, where one has inputs and outputs from some system and wishes to predict unknown outputs corresponding to known inputs. The methods discussed for supervised learning include linear and logistic regression; basis expansion, such as splines and wavelets; kernel techniques, such as local regression, local likelihood, and radial basis functions; neural networks; additive models; decision trees based on recursive partitioning, such as CART; and support vector machines. There is a Ž nal chapter on unsupervised learning, including association rules, cluster analysis, self-organizing maps, principal components and curves, and independent component analysis. Many statisticians will be unfamiliar with at least some of these algorithms. Association rules are popular for mining commercial data in what is called “market basket analysis.” The aim is to discover types of products often purchased together. Such knowledge can be used to develop marketing strategies, such as store or catalog layouts. Self-organizing maps (SOMs) involve essentially constrained k-means clustering, where prototypes are mapped to a two-dimensional curved coordinate system. Independent components analysis is similar to principal components analysis and factor analysis, but it uses higher-order moments to achieve independence, not merely zero correlation between components. A strength of the book is the attempt to organize a plethora of methods into a coherent whole. The relationships among the methods are emphasized. I know of no other book that covers so much ground. Of course, with such broad coverage, it is not possible to cover any single topic in great depth, so this book will encourage further reading. Fortunately, each chapter includes bibliographic notes surveying the recent literature. These notes and the extensive references provide a good introduction to the learning literature, including much outside of statistics. The book might be more suitable as a textbook if less material were covered in greater depth; however, such a change would compromise the book’s usefulness as a reference, and so I am happier with the book as it was written.) 
<|cite_end|> <|cite_start|> (Reference: Lectures on Stochastic Programming: Modeling and Theory, by A. Shapiro, D. Dentcheva, and A. Ruszczyński) <|cite_end|>, i.e., $\cJ(\theta):=\E_{x\sim\pi}\left[\cR_\theta(x)\right]$, where $x$ is sampled from an invariant distribution $\pi$. Clearly, problem \eqref{eq-max-reward} can be viewed as a generalization of those machine learning problems with oblivious objective functions. In the literature, an RL problem is often formulated as a discrete-time and discounted Markov decision process (MDP) <|cite_start|> (Reference: Reinforcement Learning: An Introduction) <|cite_end|> which aims to learn an optimal policy by optimizing the (discounted) cumulative sum of rewards. We can also see that the learning objective of an MDP can be covered by the problem \eqref{eq-max-reward} with the property that the function $\cR(x)$ does not depend on $\theta$ (see Example \ref{eg-mdp}). Recently, the application of RL approaches for solving combinatorial optimization (CO) problems, which are typically NP-hard, has attracted much attention. These CO problems include the traveling salesman problem and related problems <|cite_start|> (Reference: Neural Combinatorial Optimization with Reinforcement Learning: This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent network that, given a set of city coordinates, predicts a distribution over different city permutations.
Using negative tour length as the reward signal, we optimize the parameters of the recurrent network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Despite the computational expense, without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning for Combinatorial Optimization: A Survey: Many traditional algorithms for solving combinatorial optimization problems involve using hand-crafted heuristics that sequentially construct a solution. Such heuristics are designed by domain experts and may often be suboptimal due to the hard nature of the problems. Reinforcement learning (RL) proposes a good alternative to automate the search of these heuristics by training an agent in a supervised or self-supervised manner. In this survey, we explore the recent advancements of applying RL frameworks to hard combinatorial problems. Our survey provides the necessary background for operations research and machine learning communities and showcases the works that are moving the field forward. We juxtapose recently proposed RL methods, laying out the timeline of the improvements for each problem, as well as we make a comparison with traditional algorithms, indicating that RL models can become a promising direction for solving combinatorial problems.) <|cite_end|>, the reward optimization problem arising from the finite expression method <|cite_start|> (Reference: Finite Expression Method for Solving High-Dimensional Partial Differential Equations: Designing efficient and accurate numerical solvers for high-dimensional partial differential equations (PDEs) remains a challenging and important topic in computational science and engineering, mainly due to the "curse of dimensionality" in designing numerical schemes that scale in dimension. This paper introduces a new methodology that seeks an approximate PDE solution in the space of functions with finitely many analytic expressions and, hence, this methodology is named the finite expression method (FEX). It is proved in approximation theory that FEX can avoid the curse of dimensionality. As a proof of concept, a deep reinforcement learning method is proposed to implement FEX for various high-dimensional PDEs in different dimensions, achieving high and even machine accuracy with a memory complexity polynomial in dimension and an amenable time complexity. An approximate solution with finite analytic expressions also provides interpretable insights into the ground truth PDE solution, which can further help to advance the understanding of physical systems and design postprocessing techniques for a refined solution.) <|cite_end|> <|cite_start|> (Reference: A Finite Expression Method for Solving High-Dimensional Committor Problems: Transition path theory (TPT) is a mathematical framework for quantifying rare transition events between a pair of selected metastable states $A$ and $B$. Central to TPT is the committor function, which describes the probability to hit the metastable state $B$ prior to $A$ from any given starting point of the phase space. Once the committor is computed, the transition channels and the transition rate can be readily found. 
The committor is the solution to the backward Kolmogorov equation with appropriate boundary conditions. However, solving it is a challenging task in high dimensions due to the need to mesh a whole region of the ambient space. In this work, we explore the finite expression method (FEX, Liang and Yang (2022)) as a tool for computing the committor. FEX approximates the committor by an algebraic expression involving a fixed finite number of nonlinear functions and binary arithmetic operations. The optimal nonlinear functions, the binary operations, and the numerical coefficients in the expression template are found via reinforcement learning. The FEX-based committor solver is tested on several high-dimensional benchmark problems. It gives comparable or better results than neural network-based solvers. Most importantly, FEX is capable of correctly identifying the algebraic structure of the solution which allows one to reduce the committor problem to a low-dimensional one and find the committor with any desired accuracy.) <|cite_end|>, and the general binary optimization problem <|cite_start|> (Reference: Monte Carlo Policy Gradient Method for Binary Optimization: Binary optimization has a wide range of applications in combinatorial optimization problems such as MaxCut, MIMO detection, and MaxSAT. However, these problems are typically NP-hard due to the binary constraints. We develop a novel probabilistic model to sample the binary solution according to a parameterized policy distribution. Specifically, minimizing the KL divergence between the parameterized policy distribution and the Gibbs distributions of the function value leads to a stochastic optimization problem whose policy gradient can be derived explicitly similar to reinforcement learning. For coherent exploration in discrete spaces, parallel Markov Chain Monte Carlo (MCMC) methods are employed to sample from the policy distribution with diversity and approximate the gradient efficiently. We further develop a filter scheme to replace the original objective function by the one with the local search technique to broaden the horizon of the function landscape. Convergence to stationary points in expectation of the policy gradient method is established based on the concentration inequality for MCMC. Numerical results show that this framework is very promising to provide near-optimal solutions for quite a few binary optimization problems.) <|cite_end|>, to name just a few. The common key component of the aforementioned applications is reward optimization, which can also be formulated as problem \eqref{eq-max-reward}. There also exist problems with general reward functions that fall outside the scope of the cumulative trajectory rewards used in MDPs. An example is the MDP with general utilities; see e.g., <|cite_start|> (Reference: Variational Policy Gradient Method for Reinforcement Learning with General Utilities: In recent years, reinforcement learning (RL) systems with general goals beyond a cumulative sum of rewards have gained traction, such as in constrained problems, exploration, and acting upon prior experiences. In this paper, we consider policy optimization in Markov Decision Problems, where the objective is a general concave utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases. Such generality invalidates the Bellman equation. As this means that dynamic programming no longer works, we focus on direct policy search.
Analogously to the Policy Gradient Theorem \cite{sutton2000policy} available for RL with cumulative rewards, we derive a new Variational Policy Gradient Theorem for RL with general utilities, which establishes that the parametrized policy gradient may be obtained as the solution of a stochastic saddle point problem involving the Fenchel dual of the utility function. We develop a variational Monte Carlo gradient estimation algorithm to compute the policy gradient based on sample paths. We prove that the variational policy gradient scheme converges globally to the optimal policy for the general objective, though the optimization problem is nonconvex. We also establish its rate of convergence of the order $O(1/t)$ by exploiting the hidden convexity of the problem, and proves that it converges exponentially when the problem admits hidden strong convexity. Our analysis applies to the standard RL problem with cumulative rewards as a special case, in which case our result improves the available convergence rate.) <|cite_end|> <|cite_start|> (Reference: Policy Gradient for Reinforcement Learning with General Utilities: In Reinforcement Learning (RL), the goal of agents is to discover an optimal policy that maximizes the expected cumulative rewards. This objective may also be viewed as finding a policy that optimizes a linear function of its state-action occupancy measure, hereafter referred as Linear RL. However, many supervised and unsupervised RL problems are not covered in the Linear RL framework, such as apprenticeship learning, pure exploration and variational intrinsic control, where the objectives are non-linear functions of the occupancy measures. RL with non-linear utilities looks unwieldy, as methods like Bellman equation, value iteration, policy gradient, dynamic programming that had tremendous success in Linear RL, fail to trivially generalize. In this paper, we derive the policy gradient theorem for RL with general utilities. The policy gradient theorem proves to be a cornerstone in Linear RL due to its elegance and ease of implementability. Our policy gradient theorem for RL with general utilities shares the same elegance and ease of implementability. Based on the policy gradient theorem derived, we also present a simple sample-based algorithm. We believe our results will be of interest to the community and offer inspiration to future works in this generalized setting.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space: We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. 
We further address the setting of large finite state action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.) <|cite_end|> and references therein. Adding a regularizer to the objective function is a commonly used technique to impose desirable structure on the solution and/or to greatly enhance the expressive power and applicability of RL <|cite_start|> (Reference: Policy Mirror Descent for Reinforcement Learning: Linear Convergence, New Sampling Complexity, and Generalized Problem Classes: We present new policy mirror descent (PMD) methods for solving reinforcement learning (RL) problems with either strongly convex or general convex regularizers. By exploring the structural properties of these overall highly nonconvex problems we show that the PMD methods exhibit fast linear rate of convergence to the global optimality. We develop stochastic counterparts of these methods, and establish an ${\cal O}(1/\epsilon)$ (resp., ${\cal O}(1/\epsilon^2)$) sampling complexity for solving these RL problems with strongly (resp., general) convex regularizers using different sampling schemes, where $\epsilon$ denotes the target accuracy. We further show that the complexity for computing the gradients of these regularizers, if necessary, can be bounded by ${\cal O}\{(\log_\gamma \epsilon) [(1-\gamma)L/\mu]^{1/2}\log (1/\epsilon)\}$ (resp., ${\cal O} \{(\log_\gamma \epsilon ) (L/\epsilon)^{1/2}\}$) for problems with strongly (resp., general) convex regularizers. Here $\gamma$ denotes the discounting factor. To the best of our knowledge, these complexity bounds, along with our algorithmic developments, appear to be new in both optimization and RL literature. The introduction of these convex regularizers also greatly expands the flexibility and applicability of RL models.) <|cite_end|> <|cite_start|> (Reference: Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence: Policy optimization, which finds the desired policy by maximizing value functions via optimization techniques, lies at the heart of reinforcement learning (RL). In addition to value maximization, other practical considerations arise as well, including the need of encouraging exploration, and that of ensuring certain structural properties of the learned policy due to safety, resource and operational constraints. These can often be accounted for via regularized RL, which augments the target value function with a structure-promoting regularizer. Focusing on discounted infinite-horizon Markov decision processes, we propose a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent (arXiv:2102.00135), our algorithm accommodates a general class of convex regularizers and promotes the use of Bregman divergence in cognizant of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over an entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence feature is provably stable in the face of inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the appealing performance of GPMD.) <|cite_end|>.
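To make the role of the regularizer concrete, the following sketch runs the kind of stochastic proximal gradient ascent update studied in this paper on a toy instance of \eqref{eq-max-reward}: a Gaussian policy with mean $\theta$, a reward $\cR(x)=-\|x\|^2$ that does not depend on $\theta$, and $\cG(\theta)=\lambda\|\theta\|_1$, whose proximal mapping is the classical soft-thresholding operator. The instance, constants, and names are our own choices for illustration; the code is only a schematic of the method analyzed later, not a reference implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma, lam, eta = 1.0, 0.1, 0.05  # policy std, l1 weight, step size

def reward(x):
    return -np.sum(x**2)

def score_function_grad(theta, n_samples=1_000):
    # REINFORCE-style estimator of grad_theta E_{x~N(theta,sigma^2 I)}[R(x)]:
    # average of R(x_i) * grad_theta log pi_theta(x_i), where
    # grad_theta log pi_theta(x) = (x - theta) / sigma^2.
    # (Here R does not depend on theta, so the grad_theta R term vanishes.)
    x = theta + sigma * rng.standard_normal((n_samples, theta.size))
    scores = (x - theta) / sigma**2
    rewards = np.array([reward(xi) for xi in x])
    return np.mean(rewards[:, None] * scores, axis=0)

def prox_l1(v, t):
    # Proximal mapping of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

theta = np.array([1.0, -2.0])
for _ in range(200):
    g = score_function_grad(theta)               # stochastic gradient of J
    theta = prox_l1(theta + eta * g, eta * lam)  # ascent step, then prox
print(theta)  # approaches 0, the maximizer of J(theta) - G(theta)
\end{verbatim}
Replacing the plain estimator above with a variance-reduced one such as PAGE is what yields the improved sample complexity discussed in the abstract, and indicator-type regularizers fit the same template by substituting a projection for the soft-thresholding step.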
When one considers the direct/simplex parameterization <|cite_start|> (Reference: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution or how they cope with approximation error due to using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy; and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results. One central contribution of this work is in providing approximation guarantees that are average case -- which avoid explicit worst-case dependencies on the size of state space -- by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number).) <|cite_end|> of $\pi_\theta$, a regularization function using the indicator function for the standard probability simplex is needed. Moreover, by using other indicator functions for general convex sets, one is able to impose some additional constraints on the parameter $\theta$. For the softmax parameterization, one may also enforce a bound constraint on $\theta$ to prevent it from taking values that are too large. This can avoid potential numerical issues, including overflow errors in floating-point arithmetic. On the other hand, there are incomplete parametric policy classes, such as the log-linear and neural policy classes, that are often formulated as $\{\pi_\theta|\theta\in \Theta\}$ <|cite_start|> (Reference: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution or how they cope with approximation error due to using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy; and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results.
One central contribution of this work is in providing approximation guarantees that are average case -- which avoid explicit worst-case dependencies on the size of state space -- by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number).) <|cite_end|>. In this case, the indicator function is still necessary and useful. Some recent works (see e.g., <|cite_start|> (Reference: Understanding the impact of entropy on policy optimization: Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to help with \emph{exploration} by encouraging the selection of more stochastic policies. In this work, we analyze this claim using new visualizations of the optimization landscape based on randomly perturbing the loss function. We first show that even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. Then, we qualitatively show that in some environments, a policy with higher entropy can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. This paper presents new tools for understanding the optimization landscape, shows that policy entropy serves as a regularizer, and highlights the challenge of designing general-purpose policy optimization algorithms.) <|cite_end|> <|cite_start|> (Reference: Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution (say with a sufficiently rich policy class); how they cope with approximation error due to using a restricted class of parametric policies; or their finite sample behavior. Such characterizations are important not only to compare these methods to their approximate value function counterparts (where such issues are relatively well understood, at least in the worst case), but also to help with more principled approaches to algorithm design. This work provides provable characterizations of computational, approximation, and sample size issues with regards to policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: 1) "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy, and 2) restricted policy classes, which may not contain the optimal policy and where we provide agnostic learning results. One insight of this work is in formalizing the importance how a favorable initial state distribution provides a means to circumvent worst-case exploration issues. Overall, these results place policy gradient methods under a solid theoretical footing, analogous to the global convergence guarantees of iterative value function based algorithms.) <|cite_end|> <|cite_start|> (Reference: On the Global Convergence Rates of Softmax Policy Gradient Methods: We make three contributions toward better understanding policy gradient methods in the tabular setting. 
First, we show that with the true gradient, policy gradient with a softmax parametrization converges at a $O(1/t)$ rate, with constants depending on the problem and initialization. This result significantly expands the recent asymptotic convergence results. The analysis relies on two findings: that the softmax policy gradient satisfies a \L{}ojasiewicz inequality, and the minimum probability of an optimal action during optimization can be bounded in terms of its initial value. Second, we analyze entropy regularized policy gradient and show that it enjoys a significantly faster linear convergence rate $O(e^{-c \cdot t})$ toward softmax optimal policy $(c > 0)$. This result resolves an open question in the recent literature. Finally, combining the above two results and additional new $\Omega(1/t)$ lower bound results, we explain how entropy regularization improves policy optimization, even with the true gradient, from the perspective of convergence rate. The separation of rates is further explained using the notion of non-uniform \L{}ojasiewicz degree. These results provide a theoretical understanding of the impact of entropy and corroborate existing empirical studies.) <|cite_end|> <|cite_start|> (Reference: Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization: Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain limited even for the tabular setting. This paper develops $\textit{non-asymptotic}$ convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- or even quadratically once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-\`a-vis inexactness of policy evaluation. Our convergence results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence.) <|cite_end|>) have investigated the impact of entropy regularization for MDPs. Systematic studies on general convex regularization for MDPs had been limited until the recent works <|cite_start|> (Reference: A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning: We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator for policy optimization. The hybrid policy gradient estimator is shown to be biased, but has variance reduced property. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-looped algorithm then introduce a more practical restarting variant.
We prove that both algorithms can achieve the best-known trajectory complexity $\mathcal{O}\left(\varepsilon^{-3}\right)$ to attain a first-order stationary point for the composite problem which is better than existing REINFORCE/GPOMDP $\mathcal{O}\left(\varepsilon^{-4}\right)$ and SVRPG $\mathcal{O}\left(\varepsilon^{-10/3}\right)$ in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite settings indeed have some advantages compared to the non-composite ones on certain problems.) <|cite_end|> <|cite_start|> (Reference: Policy Mirror Descent for Reinforcement Learning: Linear Convergence, New Sampling Complexity, and Generalized Problem Classes: We present new policy mirror descent (PMD) methods for solving reinforcement learning (RL) problems with either strongly convex or general convex regularizers. By exploring the structural properties of these overall highly nonconvex problems we show that the PMD methods exhibit fast linear rate of convergence to the global optimality. We develop stochastic counterparts of these methods, and establish an ${\cal O}(1/\epsilon)$ (resp., ${\cal O}(1/\epsilon^2)$) sampling complexity for solving these RL problems with strongly (resp., general) convex regularizers using different sampling schemes, where $\epsilon$ denote the target accuracy. We further show that the complexity for computing the gradients of these regularizers, if necessary, can be bounded by ${\cal O}\{(\log_\gamma \epsilon) [(1-\gamma)L/\mu]^{1/2}\log (1/\epsilon)\}$ (resp., ${\cal O} \{(\log_\gamma \epsilon ) (L/\epsilon)^{1/2}\}$)for problems with strongly (resp., general) convex regularizers. Here $\gamma$ denotes the discounting factor. To the best of our knowledge, these complexity bounds, along with our algorithmic developments, appear to be new in both optimization and RL literature. The introduction of these convex regularizers also greatly expands the flexibility and applicability of RL models.) <|cite_end|> <|cite_start|> (Reference: Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence: Policy optimization, which finds the desired policy by maximizing value functions via optimization techniques, lies at the heart of reinforcement learning (RL). In addition to value maximization, other practical considerations arise as well, including the need of encouraging exploration, and that of ensuring certain structural properties of the learned policy due to safety, resource and operational constraints. These can often be accounted for via regularized RL, which augments the target value function with a structure-promoting regularizer. Focusing on discounted infinite-horizon Markov decision processes, we propose a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent (arXiv:2102.00135), our algorithm accommodates a general class of convex regularizers and promotes the use of Bregman divergence in cognizant of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over an entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence feature is provably stable in the face of inexact policy evaluation and imperfect policy updates. 
Numerical experiments are provided to corroborate the appealing performance of GPMD.) <|cite_end|>. Finally, problem \eqref{eq-max-reward} takes a form similar to that of the convex stochastic optimization problem with decision-dependent distributions considered in <|cite_start|> (Reference: Stochastic Optimization with Decision-Dependent Distributions: Stochastic optimization problems often involve data distributions that change in reaction to the decision variables. This is the case, for example, when members of the population respond to a deployed classifier by manipulating their features so as to improve the likelihood of being positively labeled. Recent works on performative prediction identify an intriguing solution concept for such problems: find the decision that is optimal with respect to the static distribution that the decision induces. Continuing this line of work, we show that, in the strongly convex setting, typical stochastic algorithms—originally designed for static problems—can be applied directly for finding such equilibria with little loss in efficiency. The reason is simple to explain: the main consequence of the distributional shift is that it corrupts algorithms with a bias that decays linearly with the distance to the solution. Using this perspective, we obtain convergence guarantees for popular algorithms, such as stochastic gradient, clipped gradient, prox-point, and dual averaging methods, along with their accelerated and proximal variants. In realistic applications, deployment of a decision rule is often much more expensive than sampling. We show how to modify the aforementioned algorithms so as to maintain their sample efficiency when performing only logarithmically many deployments.) <|cite_end|>. Consequently, we can see that problem \eqref{eq-max-reward} is in fact quite general and has promising modeling power as it covers many existing problems in the literature. The purpose of this paper is to leverage existing tools and results in MDPs and nonconvex optimization for solving the general regularized expected reward optimization problem \eqref{eq-max-reward} with general policy parameterizations, which, to the best of our knowledge, has not been formally considered in the RL literature. It is well known that the policy gradient method <|cite_start|> (Reference: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning: ) <|cite_end|> <|cite_start|> (Reference: Policy gradient methods for reinforcement learning with function approximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.)
<|cite_end|> <|cite_start|> (Reference: Infinite-Horizon Policy-Gradient Estimation: Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\em biased} estimate of the gradient of the {\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the {\em mixing time} of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward) <|cite_end|>, which lies at the heart of RL, is one of the most competitive and efficient algorithms due to its simplicity and versatility. Moreover, the policy gradient method is readily implemented and can be paired with other effective techniques. In this paper, we observe that the stochastic proximal gradient method, which shares the same spirit as the policy gradient method, can be applied directly to solve the targeted problem \eqref{eq-max-reward} with convergence guarantees to a stationary point. Since the classical stochastic gradient estimator typically introduces a large variance, there is also a need to design advanced stochastic gradient estimators with smaller variance. To this end, we also investigate a stochastic variance-reduced proximal gradient method and analyze its convergence properties. In particular, the contributions of this paper are summarized as follows. \begin{itemize} \item We consider a novel regularized reward optimization framework \eqref{eq-max-reward} that covers many existing important models in the machine learning and optimization literature. Thus, problem \eqref{eq-max-reward} admits promising modeling power, which encourages potential applications. \item In order to solve our targeted problem, we consider applying the classical stochastic proximal gradient method and analyze its convergence properties. We first demonstrate that the gradient of $\cJ(\cdot)$ is Lipschitz continuous under standard conditions with respect to the reward function $\cR_\theta(\cdot)$ and the parameterized policy $\pi_\theta(\cdot)$.
Using the L-smoothness of $\cJ(\cdot)$, we then show that the classical stochastic proximal gradient method with a constant step-size (depending only on the Lipschitz constant for $\nabla_\theta\cJ(\cdot)$) for solving problem \eqref{eq-max-reward} outputs an $\epsilon$-stationary point (see Definition \ref{def-stationary}) within $T:=O(\epsilon^{-2})$ iterations, and the sample size for each iteration is $O(\epsilon^{-2})$, where $\epsilon>0$ is a given tolerance. Thus, the total sample complexity becomes $O(\epsilon^{-4})$, which matches the current state-of-the-art sample complexity of the classical stochastic policy gradient for MDPs; see e.g., <|cite_start|> (Reference: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning: ) <|cite_end|> <|cite_start|> (Reference: Infinite-Horizon Policy-Gradient Estimation: Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\em biased} estimate of the gradient of the {\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the {\em mixing time} of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward) <|cite_end|> <|cite_start|> (Reference: Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies: Policy gradient (PG) methods are a widely used reinforcement learning methodology in many applications such as video games, autonomous driving, and robotics. In spite of its empirical success, a rigorous understanding of the global convergence of PG methods is lacking in the literature. In this work, we close the gap by viewing PG methods from a nonconvex optimization perspective. In particular, we propose a new variant of PG methods for infinite-horizon problems that uses a random rollout horizon for the Monte-Carlo estimation of the policy gradient. This method then yields an unbiased estimate of the policy gradient with bounded variance, which enables the tools from nonconvex optimization to be applied to establish global convergence. Employing this perspective, we first recover the convergence results with rates to the stationary-point policies in the literature. More interestingly, motivated by advances in nonconvex optimization, we modify the proposed PG method by introducing periodically enlarged stepsizes. 
The modified algorithm is shown to escape saddle points under mild assumptions on the reward and the policy parameterization. Under a further strict saddle points assumption, this result establishes convergence to essentially locally-optimal policies of the underlying problem, and thus bridges the gap in existing literature on the convergence of PG methods. Results from experiments on the inverted pendulum are then provided to corroborate our theory, namely, by slightly reshaping the reward function to satisfy our assumption, unfavorable saddle points can be avoided and better limit points can be attained. Intriguingly, this empirical finding justifies the benefit of reward-reshaping from a nonconvex optimization perspective.) <|cite_end|> <|cite_start|> (Reference: Non-asymptotic Convergence of Adam-type Reinforcement Learning Algorithms under Markovian Sampling: Despite the wide applications of Adam in reinforcement learning (RL), the theoretical convergence of Adam-type RL algorithms has not been established. This paper provides the first such convergence analysis for two fundamental RL algorithms of policy gradient (PG) and temporal difference (TD) learning that incorporate AMSGrad updates (a standard alternative of Adam in theoretical analysis), referred to as PG-AMSGrad and TD-AMSGrad, respectively. Moreover, our analysis focuses on Markovian sampling for both algorithms. We show that under general nonlinear function approximation, PG-AMSGrad with a constant stepsize converges to a neighborhood of a stationary point at the rate of $\mathcal{O}(1/T)$ (where $T$ denotes the number of iterations), and with a diminishing stepsize converges exactly to a stationary point at the rate of $\mathcal{O}(\log^2 T/\sqrt{T})$. Furthermore, under linear function approximation, TD-AMSGrad with a constant stepsize converges to a neighborhood of the global optimum at the rate of $\mathcal{O}(1/T)$, and with a diminishing stepsize converges exactly to the global optimum at the rate of $\mathcal{O}(\log T/\sqrt{T})$. Our study develops new techniques for analyzing the Adam-type RL algorithms under Markovian sampling.) <|cite_end|> <|cite_start|> (Reference: A general sample complexity analysis of vanilla policy gradient: We adapt recent tools developed for the analysis of Stochastic Gradient Descent (SGD) in non-convex optimization to obtain convergence and sample complexity guarantees for the vanilla policy gradient (PG). Our only assumptions are that the expected return is smooth w.r.t. the policy parameters, that its $H$-step truncated gradient is close to the exact gradient, and a certain ABC assumption. This assumption requires the second moment of the estimated gradient to be bounded by $A\geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full batch gradient and an additive constant $C \geq 0$, or any combination of aforementioned. We show that the ABC assumption is more general than the commonly used assumptions on the policy space to prove convergence to a stationary point. We provide a single convergence theorem that recovers the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG to a stationary point. Our results also affords greater flexibility in the choice of hyper parameters such as the step size and the batch size $m$, including the single trajectory case (i.e., $m=1$). 
When an additional relaxed weak gradient domination assumption is available, we establish a novel global optimum convergence theory of PG with $\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity. We then instantiate our theorems in different settings, where we both recover existing results and obtain improved sample complexity, e.g., $\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity for the convergence to the global optimum for Fisher-non-degenerated parametrized policies.) <|cite_end|>. \item Moreover, in order to further reduce the variance of the stochastic gradient estimator, we utilize an importance-sampling-based probabilistic gradient estimator, which leads to an efficient single-loop variance-reduced method (an illustrative sketch of this estimator is given below, after this record's reference metadata). The application of this probabilistic gradient estimator is motivated by the recent progress in developing efficient stochastic variance-reduced gradient methods for solving stochastic optimization <|cite_start|> (Reference: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization: In this paper, we propose a novel stochastic gradient estimator -- ProbAbilistic Gradient Estimator (PAGE) -- for nonconvex optimization. PAGE is easy to implement as it is designed via a small adjustment to vanilla SGD: in each iteration, PAGE uses the vanilla minibatch SGD update with probability $p_t$ or reuses the previous gradient with a small adjustment, at a much lower computational cost, with probability $1-p_t$. We give a simple formula for the optimal choice of $p_t$. Moreover, we prove the first tight lower bound $\Omega(n+\frac{\sqrt{n}}{\epsilon^2})$ for nonconvex finite-sum problems, which also leads to a tight lower bound $\Omega(b+\frac{\sqrt{b}}{\epsilon^2})$ for nonconvex online problems, where $b:= \min\{\frac{\sigma^2}{\epsilon^2}, n\}$. Then, we show that PAGE obtains the optimal convergence results $O(n+\frac{\sqrt{n}}{\epsilon^2})$ (finite-sum) and $O(b+\frac{\sqrt{b}}{\epsilon^2})$ (online) matching our lower bounds for both nonconvex finite-sum and online problems. Besides, we also show that for nonconvex functions satisfying the Polyak-\L{}ojasiewicz (PL) condition, PAGE can automatically switch to a faster linear convergence rate $O(\cdot\log \frac{1}{\epsilon})$. Finally, we conduct several deep learning experiments (e.g., LeNet, VGG, ResNet) on real datasets in PyTorch showing that PAGE not only converges much faster than SGD in training but also achieves the higher test accuracy, validating the optimal theoretical results and confirming the practical superiority of PAGE.) <|cite_end|> and (unregularized) MDPs <|cite_start|> (Reference: PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation: Despite their success, policy gradient methods suffer from high variance of the gradient estimate, which can result in unsatisfactory sample complexity. Recently, numerous variance-reduced extensions of policy gradient methods with provably better sample complexity and competitive numerical performance have been proposed. After a compact survey on some of the main variance-reduced REINFORCE-type methods, we propose ProbAbilistic Gradient Estimation for Policy Gradient (PAGE-PG), a novel loopless variance-reduced policy gradient method based on a probabilistic switch between two types of updates. Our method is inspired by the PAGE estimator for supervised learning and leverages importance sampling to obtain an unbiased gradient estimator.
We show that PAGE-PG enjoys a $\mathcal{O}\left( \epsilon^{-3} \right)$ average sample complexity to reach an $\epsilon$-stationary solution, which matches the sample complexity of its most competitive counterparts under the same setting. A numerical evaluation confirms the competitive performance of our method on classical control tasks.) <|cite_end|>. We show that, under additional technical conditions, the total sample complexity is improved from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$. This result again matches the results of some existing competitive variance-reduced methods for MDPs <|cite_start|> (Reference: Stochastic Variance-Reduced Policy Gradient: In this paper, we propose a novel reinforcement- learning algorithm consisting in a stochastic variance-reduced version of policy gradient for solving Markov Decision Processes (MDPs). Stochastic variance-reduced gradient (SVRG) methods have proven to be very successful in supervised learning. However, their adaptation to policy gradient is not straightforward and needs to account for I) a non-concave objective func- tion; II) approximations in the full gradient com- putation; and III) a non-stationary sampling pro- cess. The result is SVRPG, a stochastic variance- reduced policy gradient algorithm that leverages on importance weights to preserve the unbiased- ness of the gradient estimate. Under standard as- sumptions on the MDP, we provide convergence guarantees for SVRPG with a convergence rate that is linear under increasing batch sizes. Finally, we suggest practical variants of SVRPG, and we empirically evaluate them on continuous MDPs.) <|cite_end|> <|cite_start|> (Reference: Sample Efficient Policy Gradient Methods with Recursive Variance Reduction: Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires $O(1/\epsilon^{3/2})$ episodes to find an $\epsilon$-approximate stationary point of the nonconcave performance function $J(\boldsymbol{\theta})$ (i.e., $\boldsymbol{\theta}$ such that $\|\nabla J(\boldsymbol{\theta})\|_2^2\leq\epsilon$). This sample complexity improves the existing result $O(1/\epsilon^{5/3})$ for stochastic variance reduced policy gradient algorithms by a factor of $O(1/\epsilon^{1/6})$. In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.) <|cite_end|> <|cite_start|> (Reference: A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning: We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator for policy optimization. The hybrid policy gradient estimator is shown to be biased, but has variance reduced property. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-looped algorithm then introduce a more practical restarting variant. 
We prove that both algorithms can achieve the best-known trajectory complexity $\mathcal{O}\left(\varepsilon^{-3}\right)$ to attain a first-order stationary point for the composite problem which is better than existing REINFORCE/GPOMDP $\mathcal{O}\left(\varepsilon^{-4}\right)$ and SVRPG $\mathcal{O}\left(\varepsilon^{-10/3}\right)$ in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite settings indeed have some advantages compared to the non-composite ones on certain problems.) <|cite_end|> <|cite_start|> (Reference: Bregman Gradient Policy Optimization: In the paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques. Specifically, we propose a Bregman gradient policy optimization (BGPO) algorithm based on the basic momentum technique and mirror descent iteration. Meanwhile, we further propose an accelerated Bregman gradient policy optimization (VR-BGPO) algorithm based on the variance reduced technique. Moreover, we provide a convergence analysis framework for our Bregman gradient policy optimization under the nonconvex setting. We prove that our BGPO achieves a sample complexity of $O(\epsilon^{-4})$ for finding $\epsilon$-stationary policy only requiring one trajectory at each iteration, and our VR-BGPO reaches the best known sample complexity of $O(\epsilon^{-3})$, which also only requires one trajectory at each iteration. In particular, by using different Bregman divergences, our BGPO framework unifies many existing policy optimization algorithms such as the existing (variance reduced) policy gradient algorithms such as natural policy gradient algorithm. Extensive experimental results on multiple reinforcement learning tasks demonstrate the efficiency of our new algorithms.) <|cite_end|> <|cite_start|> (Reference: Policy Optimization with Stochastic Mirror Descent: Improving sample efficiency has been a longstanding goal in reinforcement learning. This paper proposes $\mathtt{VRMPO}$ algorithm: a sample efficient policy gradient method with stochastic mirror descent. In $\mathtt{VRMPO}$, a novel variance-reduced policy gradient estimator is presented to improve sample efficiency. We prove that the proposed $\mathtt{VRMPO}$ needs only $\mathcal{O}(\epsilon^{-3})$ sample trajectories to achieve an $\epsilon$-approximate first-order stationary point, which matches the best sample complexity for policy optimization. The extensive experimental results demonstrate that $\mathtt{VRMPO}$ outperforms the state-of-the-art policy gradient methods in various settings.) <|cite_end|> <|cite_start|> (Reference: PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation: Despite their success, policy gradient methods suffer from high variance of the gradient estimate, which can result in unsatisfactory sample complexity. Recently, numerous variance-reduced extensions of policy gradient methods with provably better sample complexity and competitive numerical performance have been proposed. After a compact survey on some of the main variance-reduced REINFORCE-type methods, we propose ProbAbilistic Gradient Estimation for Policy Gradient (PAGE-PG), a novel loopless variance-reduced policy gradient method based on a probabilistic switch between two types of updates. 
Our method is inspired by the PAGE estimator for supervised learning and leverages importance sampling to obtain an unbiased gradient estimator. We show that PAGE-PG enjoys a $\mathcal{O}\left( \epsilon^{-3} \right)$ average sample complexity to reach an $\epsilon$-stationary solution, which matches the sample complexity of its most competitive counterparts under the same setting. A numerical evaluation confirms the competitive performance of our method on classical control tasks.) <|cite_end|>. Moreover, to the best of our knowledge, the application of the above probabilistic gradient estimator is new for solving the regularized expected reward optimization \eqref{eq-max-reward}. \end{itemize} The rest of this paper is organized as follows. We first summarize related work in Section \ref{section-relatedwork}. Next, in Section \ref{section-preliminary}, we present some background information that is needed for the exposition of this paper. Then, in Section \ref{section-spgd}, we describe the classical stochastic proximal gradient method for solving \eqref{eq-max-reward} and present the convergence properties of this method under standard technical conditions. Section \ref{section-page} is dedicated to describing and analyzing the stochastic variance-reduced proximal gradient method with an importance-sampling-based probabilistic gradient estimator. Finally, we make concluding remarks and list limitations and future research directions of this paper in Section \ref{section-conclusions}. Related Work \label{section-relatedwork} \textbf{The policy gradient method}. One of the most influential algorithms for solving RL problems is the policy gradient method built upon the foundations established in <|cite_start|> (Reference: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning: ) <|cite_end|> <|cite_start|> (Reference: Policy gradient methods for reinforcement learning with function approximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.) <|cite_end|> <|cite_start|> (Reference: Infinite-Horizon Policy-Gradient Estimation: Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\em biased} estimate of the gradient of the {\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the {\em mixing time} of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward) <|cite_end|>. Motivated by the empirical success of the policy gradient method and its variants, analyzing the convergence properties of these methods has long been one of the most active research topics in RL. Since the objective function $\cJ(\theta)$ is generally nonconcave, early works <|cite_start|> (Reference: Policy gradient methods for reinforcement learning with function approximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.) <|cite_end|> <|cite_start|> (Reference: Policy gradient in Lipschitz Markov Decision Processes: ) <|cite_end|> focused on asymptotic convergence to a stationary point. By utilizing the special structure in (entropy regularized) MDPs, recent works <|cite_start|> (Reference: Neural Proximal/Trust Region Policy Optimization Attains Globally Optimal Policy: Proximal policy optimization and trust region policy optimization (PPO and TRPO) with actor and critic parametrized by neural networks achieve significant empirical success in deep reinforcement learning. However, due to nonconvexity, the global convergence of PPO and TRPO remains less understood, which separates theory from practice. In this paper, we prove that a variant of PPO and TRPO equipped with overparametrized neural networks converges to the globally optimal policy at a sublinear rate. The key to our analysis is the global convergence of infinite-dimensional mirror descent under a notion of one-point monotonicity, where the gradient and iterate are instantiated by neural networks.
In particular, the desirable representation power and optimization geometry induced by the overparametrization of such neural networks allow them to accurately approximate the infinite-dimensional gradient and iterate.) <|cite_end|> <|cite_start|> (Reference: On the Global Convergence Rates of Softmax Policy Gradient Methods: We make three contributions toward better understanding policy gradient methods in the tabular setting. First, we show that with the true gradient, policy gradient with a softmax parametrization converges at a $O(1/t)$ rate, with constants depending on the problem and initialization. This result significantly expands the recent asymptotic convergence results. The analysis relies on two findings: that the softmax policy gradient satisfies a \L{}ojasiewicz inequality, and the minimum probability of an optimal action during optimization can be bounded in terms of its initial value. Second, we analyze entropy regularized policy gradient and show that it enjoys a significantly faster linear convergence rate $O(e^{-c \cdot t})$ toward softmax optimal policy $(c > 0)$. This result resolves an open question in the recent literature. Finally, combining the above two results and additional new $\Omega(1/t)$ lower bound results, we explain how entropy regularization improves policy optimization, even with the true gradient, from the perspective of convergence rate. The separation of rates is further explained using the notion of non-uniform \L{}ojasiewicz degree. These results provide a theoretical understanding of the impact of entropy and corroborate existing empirical studies.) <|cite_end|> <|cite_start|> (Reference: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution or how they cope with approximation error due to using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy; and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results. One central contribution of this work is in providing approximation guarantees that are average case -- which avoid explicit worst-case dependencies on the size of state space -- by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number).) <|cite_end|> <|cite_start|> (Reference: Softmax Policy Gradient Methods Can Take Exponential Time to Converge: The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning. 
For $\gamma$-discounted infinite-horizon tabular Markov decision processes (MDPs), remarkable progress has recently been achieved towards establishing global convergence of softmax PG methods in finding a near-optimal policy. However, prior results fall short of delineating clear dependencies of convergence rates on salient parameters such as the cardinality of the state space $\mathcal{S}$ and the effective horizon $\frac{1}{1-\gamma}$, both of which could be excessively large. In this paper, we deliver a pessimistic message regarding the iteration complexity of softmax PG methods, despite assuming access to exact gradient computation. Specifically, we demonstrate that the softmax PG method with stepsize $\eta$ can take \[ \frac{1}{\eta} |\mathcal{S}|^{2^{\Omega\big(\frac{1}{1-\gamma}\big)}} ~\text{iterations} \] to converge, even in the presence of a benign policy initialization and an initial state distribution amenable to exploration (so that the distribution mismatch coefficient is not exceedingly large). This is accomplished by characterizing the algorithmic dynamics over a carefully-constructed MDP containing only three actions. Our exponential lower bound hints at the necessity of carefully adjusting update rules or enforcing proper regularization in accelerating PG methods.) <|cite_end|> <|cite_start|> (Reference: On the Convergence Rates of Policy Gradient Methods: We consider infinite-horizon discounted Markov decision problems with finite state and action spaces and study the convergence rates of the projected policy gradient method and a general class of policy mirror descent methods, all with direct parametrization in the policy space. First, we develop a theory of weak gradient-mapping dominance and use it to prove sharper sublinear convergence rate of the projected policy gradient method. Then we show that with geometrically increasing step sizes, a general class of policy mirror descent methods, including the natural policy gradient method and a projected Q-descent method, all enjoy a linear rate of convergence without relying on entropy or other strongly convex regularization. Finally, we also analyze the convergence rate of an inexact policy mirror descent method and estimate its sample complexity under a simple generative model.) <|cite_end|> <|cite_start|> (Reference: Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization: Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain limited even for the tabular setting. This paper develops $\textit{non-asymptotic}$ convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- or even quadratically once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-\`a-vis inexactness of policy evaluation. 
Our convergence results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence.) <|cite_end|> <|cite_start|> (Reference: Policy Mirror Descent for Reinforcement Learning: Linear Convergence, New Sampling Complexity, and Generalized Problem Classes: We present new policy mirror descent (PMD) methods for solving reinforcement learning (RL) problems with either strongly convex or general convex regularizers. By exploring the structural properties of these overall highly nonconvex problems we show that the PMD methods exhibit fast linear rate of convergence to the global optimality. We develop stochastic counterparts of these methods, and establish an ${\cal O}(1/\epsilon)$ (resp., ${\cal O}(1/\epsilon^2)$) sampling complexity for solving these RL problems with strongly (resp., general) convex regularizers using different sampling schemes, where $\epsilon$ denote the target accuracy. We further show that the complexity for computing the gradients of these regularizers, if necessary, can be bounded by ${\cal O}\{(\log_\gamma \epsilon) [(1-\gamma)L/\mu]^{1/2}\log (1/\epsilon)\}$ (resp., ${\cal O} \{(\log_\gamma \epsilon ) (L/\epsilon)^{1/2}\}$)for problems with strongly (resp., general) convex regularizers. Here $\gamma$ denotes the discounting factor. To the best of our knowledge, these complexity bounds, along with our algorithmic developments, appear to be new in both optimization and RL literature. The introduction of these convex regularizers also greatly expands the flexibility and applicability of RL models.) <|cite_end|>
[ "<|reference_start|> PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation: Despite their success, policy gradient methods suffer from high variance of the gradient estimate, which can result in unsatisfactory sample complexity. Recently, numerous variance-reduced extensions of policy gradient methods with provably better sample complexity and competitive numerical performance have been proposed. After a compact survey on some of the main variance-reduced REINFORCE-type methods, we propose ProbAbilistic Gradient Estimation for Policy Gradient (PAGE-PG), a novel loopless variance-reduced policy gradient method based on a probabilistic switch between two types of updates. Our method is inspired by the PAGE estimator for supervised learning and leverages importance sampling to obtain an unbiased gradient estimator. We show that PAGE-PG enjoys a $\\mathcal{O}\\left( \\epsilon^{-3} \\right)$ average sample complexity to reach an $\\epsilon$-stationary solution, which matches the sample complexity of its most competitive counterparts under the same setting. A numerical evaluation confirms the competitive performance of our method on classical control tasks. <|reference_end|>", "<|reference_start|> Infinite-Horizon Policy-Gradient Estimation: Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\\em biased} estimate of the gradient of the {\\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\\beta\\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\\beta$ is related to the {\\em mixing time} of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward <|reference_end|>", "<|reference_start|> Policy gradient methods for reinforcement learning with function\napproximation: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. 
Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. <|reference_end|>", "<|reference_start|> Policy gradient in Lipschitz Markov Decision Processes: <|reference_end|>" ]
[ 37, 46, 47, 48 ]
{"<|cite_1|>": "ss-737863", "<|multi_cite_2_1|>": "ss-1284668", "<|multi_cite_2_2|>": "ss-2315934", "<|multi_cite_2_3|>": "arxiv-54263", "<|multi_cite_3_1|>": "ss-1262651", "<|multi_cite_3_2|>": "ss-750037", "<|multi_cite_3_3|>": "ss-809961", "<|cite_4|>": "ss-737863", "<|multi_cite_5_1|>": "arxiv-111378", "<|multi_cite_5_2|>": "arxiv-252537", "<|multi_cite_6_1|>": "arxiv-428516", "<|multi_cite_6_2|>": "arxiv-517534", "<|cite_7|>": "arxiv-520349", "<|multi_cite_8_1|>": "arxiv-276370", "<|multi_cite_8_2|>": "arxiv-450676", "<|multi_cite_8_3|>": "arxiv-512309", "<|multi_cite_9_1|>": "arxiv-318031", "<|multi_cite_9_2|>": "arxiv-342792", "<|cite_10|>": "arxiv-217063", "<|cite_11|>": "arxiv-217063", "<|multi_cite_12_1|>": "arxiv-182112", "<|multi_cite_12_2|>": "ss-745505", "<|multi_cite_12_3|>": "arxiv-265218", "<|multi_cite_12_4|>": "arxiv-278141", "<|multi_cite_13_1|>": "arxiv-251283", "<|multi_cite_13_2|>": "arxiv-318031", "<|multi_cite_13_3|>": "arxiv-342792", "<|cite_14|>": "ss-1651624", "<|multi_cite_15_1|>": "ss-846089", "<|multi_cite_15_2|>": "ss-767671", "<|multi_cite_15_3|>": "arxiv-21963", "<|multi_cite_16_1|>": "ss-846089", "<|multi_cite_16_2|>": "arxiv-21963", "<|multi_cite_16_3|>": "arxiv-210602", "<|multi_cite_16_4|>": "arxiv-248396", "<|multi_cite_16_5|>": "arxiv-356884", "<|cite_17|>": "arxiv-286360", "<|cite_18|>": "arxiv-396173", "<|multi_cite_19_1|>": "arxiv-162539", "<|multi_cite_19_2|>": "arxiv-224472", "<|multi_cite_19_3|>": "arxiv-251283", "<|multi_cite_19_4|>": "arxiv-350340", "<|multi_cite_19_5|>": "arxiv-211385", "<|multi_cite_19_6|>": "arxiv-396173", "<|multi_cite_20_1|>": "ss-846089", "<|multi_cite_20_2|>": "ss-767671", "<|multi_cite_20_3|>": "arxiv-21963", "<|multi_cite_21_1|>": "ss-767671", "<|multi_cite_21_2|>": "ss-1407849", "<|multi_cite_22_1|>": "arxiv-211331", "<|multi_cite_22_2|>": "arxiv-265218", "<|multi_cite_22_3|>": "arxiv-217063", "<|multi_cite_22_4|>": "arxiv-322935", "<|multi_cite_22_5|>": "arxiv-393484", "<|multi_cite_22_6|>": "arxiv-278141", "<|multi_cite_22_7|>": "arxiv-318031", "<|multi_cite_22_8|>": "arxiv-478984", "<|multi_cite_23_1|>": "arxiv-210602", "<|multi_cite_23_2|>": "arxiv-462162", "<|multi_cite_23_3|>": "arxiv-298134", "<|multi_cite_23_4|>": "arxiv-248396", "<|multi_cite_23_5|>": "arxiv-356884", "<|multi_cite_23_6|>": "arxiv-318031", "<|cite_24|>": "ss-737863", "<|multi_cite_25_1|>": "arxiv-276370", "<|multi_cite_25_2|>": "arxiv-450676", "<|multi_cite_25_3|>": "arxiv-512309", "<|cite_26|>": "arxiv-276370", "<|multi_cite_27_1|>": "ss-1068523", "<|multi_cite_27_2|>": "arxiv-117842", "<|multi_cite_27_3|>": "ss-1260141", "<|multi_cite_27_4|>": "arxiv-286360", "<|multi_cite_28_1|>": "arxiv-162539", "<|multi_cite_28_2|>": "arxiv-224472", "<|multi_cite_28_3|>": "arxiv-252829", "<|multi_cite_28_4|>": "arxiv-251283", "<|multi_cite_28_5|>": "arxiv-350340", "<|multi_cite_28_6|>": "arxiv-211385", "<|multi_cite_28_7|>": "arxiv-396173", "<|cite_29|>": "ss-1651624", "<|multi_cite_30_1|>": "ss-1262651", "<|multi_cite_30_2|>": "ss-750037", "<|multi_cite_30_3|>": "ss-809961", "<|cite_31|>": "arxiv-248561", "<|multi_cite_32_1|>": "arxiv-271281", "<|multi_cite_32_2|>": "arxiv-248561", "<|multi_cite_32_3|>": "ss-1651624", "<|multi_cite_33_1|>": "arxiv-352163", "<|multi_cite_33_2|>": "arxiv-396303"}
2208.06072
<|paper_start|> Title: Multiple RISs Assisted Cell-Free Networks With Two-timescale CSI: Performance Analysis and System Design Abstract: Multiple RISs Assisted Cell-Free Networks With Two-timescale CSI: Performance Analysis and System Design: Reconfigurable intelligent surface (RIS) can be employed in a cell-free system to create favorable propagation conditions from base stations (BSs) to users via configurable elements. However, prior works on RIS-aided cell-free system designs mainly rely on the instantaneous channel state information (CSI), which may incur substantial overhead due to the extremely high dimensions of the estimated channels. To mitigate this issue, a low-complexity algorithm via the two-timescale transmission protocol is proposed in this paper, where the joint beamforming at BSs and RISs is facilitated via an alternating optimization framework to maximize the average weighted sum-rate. Specifically, the passive beamformers at RISs are optimized through the statistical CSI, and the transmit beamformers at BSs are based on the instantaneous CSI of effective channels. In this manner, a closed-form expression for the achievable weighted sum-rate is derived, which enables the evaluation of the impact of key parameters on system performance. To gain more insights, a special case without line-of-sight (LoS) components is further investigated, where a power gain on the order of $\mathcal{O}(M)$ is achieved, with $M$ being the number of BS antennas. Numerical results validate the tightness of our derived analytical expression and show the fast convergence of the proposed algorithm. Findings illustrate that the performance of the proposed algorithm with two-timescale CSI is comparable to that with instantaneous CSI in the low-to-moderate SNR regime. The impact of key system parameters such as the number of RIS elements, CSI settings, and the Rician factor is also evaluated. Moreover, the remarkable advantages of adopting the cell-free paradigm and deploying RISs are demonstrated intuitively. Introduction The proliferation of mobile phones and other portable devices continuously exacerbates the demand for data transmission in wireless networks. To support the mounting data traffic growth, the cell-free system was introduced in <|cite_start|> (Reference: Precoding and power optimization in cell-free massive mimo systems: Cell-free Massive multiple-input multiple-output (MIMO) comprises a large number of distributed low-cost low-power single antenna access points (APs) connected to a network controller. The number of AP antennas is significantly larger than the number of users. The system is not partitioned into cells and each user is served by all APs simultaneously. The simplest linear precoding schemes are conjugate beamforming and zero-forcing. Max–min power control provides equal throughput to all users and is considered in this paper. Surprisingly, under max–min power control, most APs are found to transmit at less than full power. The zero-forcing precoder significantly outperforms conjugate beamforming. For zero-forcing, a near-optimal power control algorithm is developed that is considerably simpler than exact max–min power control. An alternative to cell-free systems is small-cell operation in which each user is served by only one AP for which power optimization algorithms are also developed. Cell-free Massive MIMO is shown to provide five- to ten-fold improvement in 95%-likely per-user throughput over small-cell operation.)
<|cite_end|> <|cite_start|> (Reference: Cell-Free Massive MIMO versus Small Cells: A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs) which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max-min power control algorithms. Max-min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly 5-fold improvement in 95%-likely per-user throughput over the small-cell scheme, and 10-fold improvement when shadow fading is correlated.) <|cite_end|> <|cite_start|> (Reference: Local Partial Zero-Forcing Precoding for Cell-Free Massive MIMO: Cell-free Massive MIMO (multiple-input multiple-output) is a promising distributed network architecture for 5G-and-beyond systems. It guarantees ubiquitous coverage at high spectral efficiency (SE) by leveraging signal co-processing at multiple access points (APs), aggressive spatial user multiplexing and extraordinary macro-diversity gain. In this study, we propose two distributed precoding schemes, referred to as \textit{local partial zero-forcing} (PZF) and \textit{local protective partial zero-forcing} (PPZF), that further improve the spectral efficiency by providing an adaptable trade-off between interference cancelation and boosting of the desired signal, with no additional front-hauling overhead, and implementable by APs with very few antennas. We derive closed-form expressions for the achievable SE under the assumption of independent Rayleigh fading channel, channel estimation error and pilot contamination. PZF and PPZF can substantially outperform maximum ratio transmission and zero-forcing, and their performance is comparable to that achieved by regularized zero-forcing (RZF), which is a benchmark in the downlink. Importantly, these closed-form expressions can be employed to devise optimal (long-term) power control strategies that are also suitable for RZF, whose closed-form expression for the SE is not available.) <|cite_end|>, which has attracted extensive research interest due to its high spectral efficiency <|cite_start|> (Reference: Performance of Cell-Free Massive MIMO With Joint User Clustering and Access Point Selection: We consider an uplink cell-free massive multiple-input multiple-output (MIMO) system, in which the access points are connected to the central processing unit (CPU) through a fronthaul network. This system has the advantages of wide coverage and flexible deployment.
However, the performance of this system depends on a capacity-limited fronthaul, and when the fronthaul is saturated, the quality of service will be reduced. To address this issue, we propose a joint user clustering and AP selection scheme, which can reduce the pressure on the fronthaul link while taking into account the system performance and computational complexity. We first derive a closed-form expression for the uplink spectral efficiency over Rician fading channels. Based on the derived expression, we formulate the problem of maximizing the minimum uplink spectral efficiency across all users by jointly optimizing the large-scale fading decoding (LSFD) coefficient and power control coefficient. Then, combined with the optimization results and channel estimation error, a suboptimal access point selection scheme is proposed. In addition, we propose a user clustering scheme to further reduce the complexity of the AP selection scheme. The simulation results show that the joint user clustering and access point selection scheme can reduce the system fronthaul link pressure, while the performance degrades only slightly.) <|cite_end|>. Nevertheless, the traditional cell-free paradigm requires a large-scale deployment of BSs to guarantee favorable performance, leading to unsatisfactory energy efficiency due to enormous hardware and power expenses <|cite_start|> (Reference: Energy-Efficient Non-Orthogonal Multicast and Unicast Transmission of Cell-Free Massive MIMO Systems with SWIPT: This work investigates the energy-efficient resource allocation for layered-division multiplexing (LDM) based non-orthogonal multicast and unicast transmission in cell-free massive multiple-input multiple-output (MIMO) systems, where each user equipment (UE) performs wireless information and power transfer simultaneously. To begin with, the achievable data rates for multicast and unicast services are derived in closed form, as well as the received radio frequency (RF) power at each UE. Based on the analytical results, a nonsmooth and nonconvex optimization problem for energy efficiency (EE) maximization is formulated, which is however a challenging fractional programming problem with complex constraints. To suit the massive access setting, a first-order algorithm is developed to find both initial feasible point and the nearly optimal solution. Moreover, an accelerated algorithm is designed to improve the convergence speed. Numerical results demonstrate that the proposed first-order algorithms can achieve almost the same EE as that of second-order approaches yet with much lower computational complexity, which provides insight into the superiority of the proposed algorithms for massive access in cell-free massive MIMO systems.) <|cite_end|>. Fortunately, an emerging technology named reconfigurable intelligent surface (RIS) <|cite_start|> (Reference: Beamforming Optimization for Wireless Network Aided by Intelligent Reflecting Surface with Discrete Phase Shifts: Intelligent reflecting surface (IRS) is a cost-effective solution for achieving high spectrum and energy efficiency in future wireless networks by leveraging massive low-cost passive elements that are able to reflect the signals with adjustable phase shifts. Prior works on IRS mainly consider continuous phase shifts at reflecting elements, which are practically difficult to implement due to the hardware limitation.
In contrast, we study in this paper an IRS-aided wireless network, where an IRS with only a finite number of phase shifts at each element is deployed to assist in the communication from a multi-antenna access point (AP) to multiple single-antenna users. We aim to minimize the transmit power at the AP by jointly optimizing the continuous transmit precoding at the AP and the discrete reflect phase shifts at the IRS, subject to a given set of minimum signal-to-interference-plus-noise ratio (SINR) constraints at the user receivers. The considered problem is shown to be a mixed-integer non-linear program (MINLP) and thus is difficult to solve in general. To tackle this problem, we first study the single-user case with one user assisted by the IRS and propose both optimal and suboptimal algorithms for solving it. Besides, we analytically show that as compared to the ideal case with continuous phase shifts, the IRS with discrete phase shifts achieves the same squared power gain in terms of asymptotically large number of reflecting elements, while a constant proportional power loss is incurred that depends only on the number of phase-shift levels. The proposed designs for the single-user case are also extended to the general setup with multiple users among which some are aided by the IRS. Simulation results verify our performance analysis as well as the effectiveness of our proposed designs as compared to various benchmark schemes.) <|cite_end|> <|cite_start|> (Reference: Reconfigurable Intelligent Surfaces for Energy Efficiency in Wireless Communication: The adoption of a Reconfigurable Intelligent Surface (RIS) for downlink multi-user communication from a multi-antenna base station is investigated in this paper. We develop energy-efficient designs for both the transmit power allocation and the phase shifts of the surface reflecting elements, subject to individual link budget guarantees for the mobile users. This leads to non-convex design optimization problems for which to tackle we propose two computationally affordable approaches, capitalizing on alternating maximization, gradient descent search, and sequential fractional programming. Specifically, one algorithm employs gradient descent for obtaining the RIS phase coefficients, and fractional programming for optimal transmit power allocation. Instead, the second algorithm employs sequential fractional programming for the optimization of the RIS phase shifts. In addition, a realistic power consumption model for RIS-based systems is presented, and the performance of the proposed methods is analyzed in a realistic outdoor environment. In particular, our results show that the proposed RIS-based resource allocation methods are able to provide up to $300\%$ higher energy efficiency, in comparison with the use of regular multi-antenna amplify-and-forward relaying.) <|cite_end|> has been identified as a solution to address the above problems by creating favorable propagation conditions from BSs to UEs with low-cost and power-efficient elements. As such, the RIS has been investigated in various aspects and under different setups <|cite_start|> (Reference: {Weighted sum-rate maximization for intelligent reflecting surface enhanced wireless networks: Intelligent reflecting surface (IRS) is a romising solution to build a programmable wireless environment for future communication systems, in which the reflector elements steer the incident signal in fully customizable ways by passive beamforming. 
This work focuses on the downlink of an IRS-aided multiuser multiple-input single-output (MISO) system. A practical IRS assumption is considered, in which the incident signal can only be shifted with discrete phase levels. Then, the weighted sum-rate of all users is maximized by jointly optimizing the active beamforming at the base station (BS) and the passive beamforming at the IRS. This non-convex problem is first decomposed via the Lagrangian dual transform, and then the active and passive beamforming can be optimized alternately. In addition, an efficient algorithm with closed-form solutions is proposed for the passive beamforming, which is applicable to both the discrete phase-shift IRS and the continuous phase-shift IRS. Simulation results have verified the effectiveness of the proposed algorithm as compared to different benchmark schemes.) <|cite_end|> <|cite_start|> (Reference: Joint Optimization of Beamforming, Phase-Shifting and Power Allocation in a Multi-cluster IRS-NOMA Network: The combination of non-orthogonal multiple access (NOMA) and intelligent reflecting surface (IRS) is an efficient solution to significantly enhance the energy efficiency of the wireless communication system. In this paper, we focus on a downlink multi-cluster NOMA network, where each cluster is supported by one IRS. We aim to minimize the transmit power by jointly optimizing the beamforming, the power allocation and the phase shift of each IRS. The formulated problem is non-convex and challenging to solve due to the coupled variables, i.e., the beamforming vector, the power allocation coefficient and the phase shift matrix. To address this non-convex problem, we propose an alternating optimization based algorithm. Specifically, we divide the primal problem into two subproblems for beamforming optimization and phase-shifting feasibility, where the two subproblems are solved iteratively. Moreover, to guarantee the feasibility of the beamforming optimization problem, an iterative algorithm is proposed to search for feasible initial points. To reduce the complexity, we also propose a simplified algorithm based on partial exhaustive search for this system model. Simulation results demonstrate that the proposed alternating algorithm can yield a better performance gain than the partial exhaustive search algorithm, OMA-IRS, and NOMA with random IRS phase shifts.) <|cite_end|> <|cite_start|> (Reference: Reconfigurable Intelligent Surface Assisted Multiuser MISO Systems Exploiting Deep Reinforcement Learning: Recently, the reconfigurable intelligent surface (RIS), benefiting from the breakthrough in the fabrication of programmable meta-materials, has been speculated to be one of the key enabling technologies for future sixth-generation (6G) wireless communication systems scaled up beyond massive multiple-input multiple-output (Massive-MIMO) technology to achieve smart radio environments. Employed as reflecting arrays, the RIS is able to assist MIMO transmissions without the need for radio frequency chains, resulting in a considerable reduction in power consumption. In this paper, we investigate the joint design of the transmit beamforming matrix at the base station and the phase shift matrix at the RIS, by leveraging recent advances in deep reinforcement learning (DRL). We first develop a DRL based algorithm, in which the joint design is obtained through trial-and-error interactions with the environment by observing predefined rewards, in the context of continuous state and action.
Unlike most reported works, which utilize alternating optimization techniques to alternately obtain the transmit beamforming and phase shifts, the proposed DRL based algorithm obtains the joint design simultaneously as the output of the DRL neural network. Simulation results show that the proposed algorithm is not only able to learn from the environment and gradually improve its behavior, but also achieves performance comparable to two state-of-the-art benchmarks. It is also observed that appropriate neural network parameter settings will significantly improve the performance and convergence rate of the proposed algorithm.) <|cite_end|> <|cite_start|> (Reference: An Attention-Aided Deep Learning Framework for Massive MIMO Channel Estimation: Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the "divide-and-conquer" policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of a small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.) <|cite_end|> <|cite_start|> (Reference: Deep Reinforcement Learning-Based Intelligent Reflecting Surface for Secure Wireless Communications: In this paper, we study an intelligent reflecting surface (IRS)-aided wireless secure communication system for physical layer security, where an IRS is deployed to adjust its reflecting elements to secure the communication of multiple legitimate users in the presence of multiple eavesdroppers. Aiming to improve the system secrecy rate, a design problem for jointly optimizing the base station (BS)’s beamforming and the IRS’s reflecting beamforming is formulated considering different quality of service (QoS) requirements and time-varying channel conditions. As the system is highly dynamic and complex, a novel deep reinforcement learning (DRL)-based secure beamforming approach is first proposed to achieve the optimal beamforming policy against eavesdroppers in dynamic environments. Simulation results demonstrate that the proposed deep learning based secure beamforming approach can significantly improve the system secrecy performance compared with other approaches.) <|cite_end|> <|cite_start|> (Reference: Reconfigurable intelligent surfaces for smart wireless environments: channel estimation, system design and applications in 6G networks: ) <|cite_end|> <|cite_start|> (Reference: Low-cost intelligent reflecting surface aided Terahertz multiuser massive MIMO: design and analysis: ) <|cite_end|>.
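The discrete phase-shift designs cited above all build on one primitive: steering the $N$ reflecting elements so that the reflected paths add constructively with the direct link, and then quantizing the resulting phases to a few control bits. As a minimal, self-contained illustration (the single-antenna channels, element count and 2-bit resolution below are synthetic assumptions, not parameters from any cited work), the following sketch compares the effective channel gain without an RIS, with continuously optimized phases, and with quantized phases:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 64, 2                                       # RIS elements, phase resolution in bits
levels = 2 * np.pi * np.arange(2 ** B) / (2 ** B)  # realizable discrete phase values

crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
h_d, g, h_r = crandn(), crandn(N), crandn(N)       # direct, BS->RIS and RIS->user links
cascade = g * h_r                                  # per-element cascaded channel

# Continuous optimum: co-phase every reflected path with the direct path
theta = np.angle(h_d) - np.angle(cascade)
# Discrete case: project each phase onto the closest of the 2^B levels (on the unit circle)
theta_q = levels[np.argmin(np.abs(np.exp(1j * theta[:, None]) - np.exp(1j * levels)), axis=1)]

results = [("no RIS", h_d),
           ("continuous", h_d + cascade @ np.exp(1j * theta)),
           (f"{B}-bit", h_d + cascade @ np.exp(1j * theta_q))]
for label, h_eff in results:
    print(f"{label:>10}: |h_eff|^2 = {abs(complex(h_eff)) ** 2:8.1f}")
```

With co-phased elements the gain scales on the order of $N^2$, and even the 2-bit configuration retains most of it, which is the constant proportional power loss referred to in the discrete phase-shift abstract above.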
For instance, the joint optimization of beamformers at the BS and RIS has been widely studied in <|cite_start|> (Reference: {Weighted sum-rate maximization for intelligent reflecting surface enhanced wireless networks: Intelligent reflecting surface (IRS) is a promising solution to build a programmable wireless environment for future communication systems, in which the reflector elements steer the incident signal in fully customizable ways by passive beamforming. This work focuses on the downlink of an IRS-aided multiuser multiple-input single-output (MISO) system. A practical IRS assumption is considered, in which the incident signal can only be shifted with discrete phase levels. Then, the weighted sum-rate of all users is maximized by jointly optimizing the active beamforming at the base station (BS) and the passive beamforming at the IRS. This non-convex problem is first decomposed via the Lagrangian dual transform, and then the active and passive beamforming can be optimized alternately. In addition, an efficient algorithm with closed-form solutions is proposed for the passive beamforming, which is applicable to both the discrete phase-shift IRS and the continuous phase-shift IRS. Simulation results have verified the effectiveness of the proposed algorithm as compared to different benchmark schemes.) <|cite_end|> <|cite_start|> (Reference: Joint Optimization of Beamforming, Phase-Shifting and Power Allocation in a Multi-cluster IRS-NOMA Network: The combination of non-orthogonal multiple access (NOMA) and intelligent reflecting surface (IRS) is an efficient solution to significantly enhance the energy efficiency of the wireless communication system. In this paper, we focus on a downlink multi-cluster NOMA network, where each cluster is supported by one IRS. We aim to minimize the transmit power by jointly optimizing the beamforming, the power allocation and the phase shift of each IRS. The formulated problem is non-convex and challenging to solve due to the coupled variables, i.e., the beamforming vector, the power allocation coefficient and the phase shift matrix. To address this non-convex problem, we propose an alternating optimization based algorithm. Specifically, we divide the primal problem into two subproblems for beamforming optimization and phase-shifting feasibility, where the two subproblems are solved iteratively. Moreover, to guarantee the feasibility of the beamforming optimization problem, an iterative algorithm is proposed to search for feasible initial points. To reduce the complexity, we also propose a simplified algorithm based on partial exhaustive search for this system model. Simulation results demonstrate that the proposed alternating algorithm can yield a better performance gain than the partial exhaustive search algorithm, OMA-IRS, and NOMA with random IRS phase shifts.) <|cite_end|>. Capitalizing on the recent advancements in artificial intelligence (AI), deep reinforcement learning has been used to tackle the RIS phase-shift design <|cite_start|> (Reference: Reconfigurable Intelligent Surface Assisted Multiuser MISO Systems Exploiting Deep Reinforcement Learning: Recently, the reconfigurable intelligent surface (RIS), benefiting from the breakthrough in the fabrication of programmable meta-materials, has been speculated to be one of the key enabling technologies for future sixth-generation (6G) wireless communication systems scaled up beyond massive multiple-input multiple-output (Massive-MIMO) technology to achieve smart radio environments.
Employed as reflecting arrays, the RIS is able to assist MIMO transmissions without the need for radio frequency chains, resulting in a considerable reduction in power consumption. In this paper, we investigate the joint design of the transmit beamforming matrix at the base station and the phase shift matrix at the RIS, by leveraging recent advances in deep reinforcement learning (DRL). We first develop a DRL based algorithm, in which the joint design is obtained through trial-and-error interactions with the environment by observing predefined rewards, in the context of continuous state and action. Unlike most reported works, which utilize alternating optimization techniques to alternately obtain the transmit beamforming and phase shifts, the proposed DRL based algorithm obtains the joint design simultaneously as the output of the DRL neural network. Simulation results show that the proposed algorithm is not only able to learn from the environment and gradually improve its behavior, but also achieves performance comparable to two state-of-the-art benchmarks. It is also observed that appropriate neural network parameter settings will significantly improve the performance and convergence rate of the proposed algorithm.) <|cite_end|>, channel estimation <|cite_start|> (Reference: An Attention-Aided Deep Learning Framework for Massive MIMO Channel Estimation: Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the "divide-and-conquer" policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of a small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.) <|cite_end|>, secure communication <|cite_start|> (Reference: Deep Reinforcement Learning-Based Intelligent Reflecting Surface for Secure Wireless Communications: In this paper, we study an intelligent reflecting surface (IRS)-aided wireless secure communication system for physical layer security, where an IRS is deployed to adjust its reflecting elements to secure the communication of multiple legitimate users in the presence of multiple eavesdroppers. Aiming to improve the system secrecy rate, a design problem for jointly optimizing the base station (BS)’s beamforming and the IRS’s reflecting beamforming is formulated considering different quality of service (QoS) requirements and time-varying channel conditions.
As the system is highly dynamic and complex, a novel deep reinforcement learning (DRL)-based secure beamforming approach is first proposed to achieve the optimal beamforming policy against eavesdroppers in dynamic environments. Simulation results demonstrate that the proposed deep learning based secure beamforming approach can significantly improve the system secrecy performance compared with other approaches.) <|cite_end|>, etc. Moreover, since the RIS does not require extra hardware implementation <|cite_start|> (Reference: Hybrid Beamforming for Reconfigurable Intelligent Surface based Multi-user Communications: Achievable Rates with Limited Discrete Phase Shifts: Reconfigurable intelligent surface (RIS) has drawn considerable attention from the research community recently, as it creates favorable propagation conditions by controlling the phase shifts of the reflected waves at the surface, thereby enhancing wireless transmissions. In this paper, we study a downlink multi-user system where the transmission from a multi-antenna base station (BS) to various users is achieved by the RIS reflecting the incident signals of the BS towards the users. Unlike most existing works, we consider the practical case where only a limited number of discrete phase shifts can be realized by the finite-sized RIS. Based on the reflection-dominated one-hop propagation model between the BS and users via the RIS, a hybrid beamforming scheme is proposed and the sum-rate maximization problem is formulated. Specifically, the continuous digital beamforming and discrete RIS-based analog beamforming are performed at the BS and the RIS, respectively, and an iterative algorithm is designed to solve this problem. Both theoretical analysis and numerical validations show that the RIS-based system can achieve a good sum-rate performance by setting a reasonable size of RIS and a small number of discrete phase shifts.) <|cite_end|>, it is natural to envision a cell-free system integrating RISs, which can reap the advantages of both technologies. Hence, compared with the conventional cell-free system, a lower level of power consumption is required to achieve satisfactory performance. In other words, the fusion of RISs into a cell-free system increases the degrees of freedom available to enhance performance with low cost and power consumption. In this regard, there have been several preliminary explorations of RIS-aided cell-free systems <|cite_start|> (Reference: A joint precoding framework for wideband reconfigurable intelligent surface-aided cell-free network: Thanks to its strong ability against inter-cell interference, the cell-free network is considered a promising technique to improve network capacity. However, further capacity improvement requires deploying more base stations (BSs) with high cost and power consumption. To address this issue, inspired by the recently developed reconfigurable intelligent surface (RIS) technique, we propose the concept of RIS-aided cell-free network to improve the capacity with low cost and power consumption. The key idea is to replace some of the required BSs by low-cost and energy-efficient RISs. Then, in a wideband RIS-aided cell-free network, we formulate the problem of joint precoding design at BSs and RISs to maximize the network capacity. Due to the non-convexity and high complexity of the formulated problem, we develop an alternating optimization framework to solve this challenging problem.
In particular, we decouple this problem via fractional programming, and solve the subproblems alternatively. Note that most of the scenarios considered in existing works are special cases of the general scenario studied in this paper, and the proposed joint precoding framework can serve as a general solution to maximize the capacity in most existing RIS-aided scenarios. Finally, simulation results demonstrate that, compared with the conventional cell-free network, the network capacity under the proposed scheme can be improved significantly.) <|cite_end|> <|cite_start|> (Reference: Beyond Cell-free MIMO: Energy Efficient Reconfigurable Intelligent Surface Aided Cell-free MIMO Communications: Cell-free systems can effectively eliminate the inter-cell interference by enabling multiple base stations (BSs) to cooperatively serve users without cell boundaries at the expense of high costs of hardware and power sources due to the large-scale deployment of BSs. To tackle this issue, the low-cost reconfigurable intelligent surface (RIS) can serve as a promising technique to improve the energy efficiency of cell-free systems. In this paper, we consider an RIS aided cell-free MIMO system where multiple RISs are deployed around BSs and users to create favorable propagation conditions via reconfigurable reflections in a low-cost way, thereby enhancing cell-free MIMO communications. To maximize the energy efficiency, a hybrid beamforming (HBF) scheme consisting of the digital beamforming at BSs and the RIS-based analog beamforming is proposed. The energy efficiency maximization problem is formulated and an iterative algorithm is designed to solve this problem. The impact of the transmit power, the number of RIS, and the RIS size on energy efficiency are investigated. Both theoretical analysis and simulation results reveal that the optimal energy efficiency depends on the numbers of RISs and the RIS size. Numerical evaluations also show that the proposed system can achieve a higher energy efficiency than conventional ones.) <|cite_end|> <|cite_start|> (Reference: Decentralized Beamforming Design for Intelligent Reflecting Surface-enhanced Cell-free Networks: Cell-free networks are considered as a promising distributed network architecture to satisfy the increasing number of users and high rate expectations in beyond-5G systems. However, to further enhance network capacity, an increasing number of high-cost base stations (BSs) are required. To address this problem and inspired by the cost-effective intelligent reflecting surface (IRS) technique, we propose a fully decentralized design framework for cooperative beamforming in IRS-aided cell-free networks. We first transform the centralized weighted sum-rate maximization problem into a tractable consensus optimization problem, and then an incremental alternating direction method of multipliers (ADMM) algorithm is proposed to locally update the beamformer. The complexity and convergence of the proposed method are analyzed, and these results show that the performance of the new scheme can asymptotically approach that of the centralized one as the number of iterations increases. Results also show that IRSs can significantly increase the system sum-rate of cell-free networks and the proposed method outperforms existing decentralized methods.) <|cite_end|>. 
Specifically, authors in <|cite_start|> (Reference: A joint precoding framework for wideband reconfigurable intelligent surface-aided cell-free network: Thanks to the strong ability against the inter-cell interference, cell-free network is considered as a promising technique to improve network capacity. However, further capacity improvement requires to deploy more base stations (BSs) with high cost and power consumption. To address this issue, inspired by the recently developed reconfigurable intelligent surface (RIS) technique, we propose the concept of RIS-aided cell-free network to improve the capacity with low cost and power consumption. The key idea is to replace some of the required BSs by low-cost and energy-efficient RISs. Then, in a wideband RIS-aided cell-free network, we formulate the problem of joint precoding design at BSs and RISs to maximize the network capacity. Due to the non-convexity and high complexity of the formulated problem, we develop an alternating optimization framework to solve this challenging problem. In particular, we decouple this problem via fractional programming, and solve the subproblems alternatively. Note that most of the scenarios considered in existing works are special cases of the general scenario studied in this paper, and the proposed joint precoding framework can serve as a general solution to maximize the capacity in most existing RIS-aided scenarios. Finally, simulation results demonstrate that, compared with the conventional cell-free network, the network capacity under the proposed scheme can be improved significantly.) <|cite_end|> formulated the joint beamforming design at the BSs and RISs in the wideband scenario to maximize the network capacity, while the work in <|cite_start|> (Reference: Beyond Cell-free MIMO: Energy Efficient Reconfigurable Intelligent Surface Aided Cell-free MIMO Communications: Cell-free systems can effectively eliminate the inter-cell interference by enabling multiple base stations (BSs) to cooperatively serve users without cell boundaries at the expense of high costs of hardware and power sources due to the large-scale deployment of BSs. To tackle this issue, the low-cost reconfigurable intelligent surface (RIS) can serve as a promising technique to improve the energy efficiency of cell-free systems. In this paper, we consider an RIS aided cell-free MIMO system where multiple RISs are deployed around BSs and users to create favorable propagation conditions via reconfigurable reflections in a low-cost way, thereby enhancing cell-free MIMO communications. To maximize the energy efficiency, a hybrid beamforming (HBF) scheme consisting of the digital beamforming at BSs and the RIS-based analog beamforming is proposed. The energy efficiency maximization problem is formulated and an iterative algorithm is designed to solve this problem. The impact of the transmit power, the number of RIS, and the RIS size on energy efficiency are investigated. Both theoretical analysis and simulation results reveal that the optimal energy efficiency depends on the numbers of RISs and the RIS size. Numerical evaluations also show that the proposed system can achieve a higher energy efficiency than conventional ones.) <|cite_end|> aimed to maximize the energy efficiency. 
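Both objectives rest on alternating optimization between the active beamformer at the BS and the passive phases at the RIS. The cited algorithms handle wideband multi-user settings with fractional programming and hybrid beamforming; the sketch below is only a toy single-BS, single-user instance (synthetic channels, arbitrary sizes) in which each alternating step has a closed form, making the monotone improvement typical of such schemes easy to observe:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 32                                       # BS antennas, RIS elements
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
H, h_r, h_d = crandn(M, N), crandn(N), crandn(M)   # BS->RIS, RIS->user, BS->user links

theta = np.zeros(N)
for it in range(5):
    h_eff = h_d + H @ (np.exp(1j * theta) * h_r)   # effective channel for current phases
    w = h_eff / np.linalg.norm(h_eff)              # active step: MRT, optimal for one user
    b = (w.conj() @ H) * h_r                       # reflected terms as seen through w
    theta = np.angle(w.conj() @ h_d) - np.angle(b) # passive step: co-phase with direct term
    gain = np.linalg.norm(h_d + H @ (np.exp(1j * theta) * h_r))
    print(f"iteration {it}: beamforming gain = {gain:.3f}")  # non-decreasing by construction
```

For the energy-efficiency objective, a commonly used linear power-consumption model divides the achieved rate by the transmit power (scaled by the amplifier efficiency) plus per-antenna and per-RIS-element circuit terms; all constants and operating points below are illustrative placeholders, not values from the cited paper:

```python
def energy_efficiency(rate_bps_hz, bw_hz, p_tx_w, n_ant, n_elem,
                      amp_eff=0.4, p_ant_w=0.2, p_elem_w=0.01, p_static_w=1.0):
    """Bit/Joule under a linear power model; every constant here is a placeholder."""
    p_total = p_tx_w / amp_eff + n_ant * p_ant_w + n_elem * p_elem_w + p_static_w
    return rate_bps_hz * bw_hz / p_total

for n_elem, rate in [(0, 4.0), (64, 5.2), (256, 6.0)]:   # made-up rate/size operating points
    ee = energy_efficiency(rate, 10e6, p_tx_w=1.0, n_ant=8, n_elem=n_elem)
    print(f"N = {n_elem:3d}: {rate} bit/s/Hz -> {ee / 1e6:.1f} Mbit/J")
```

Under such a model the energy efficiency peaks at an intermediate surface size, consistent with the finding above that the optimal EE depends on the number of RISs and the RIS size.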
In addition, <|cite_start|> (Reference: Decentralized Beamforming Design for Intelligent Reflecting Surface-enhanced Cell-free Networks: Cell-free networks are considered a promising distributed network architecture to satisfy the increasing number of users and high rate expectations in beyond-5G systems. However, to further enhance network capacity, an increasing number of high-cost base stations (BSs) are required. To address this problem, and inspired by the cost-effective intelligent reflecting surface (IRS) technique, we propose a fully decentralized design framework for cooperative beamforming in IRS-aided cell-free networks. We first transform the centralized weighted sum-rate maximization problem into a tractable consensus optimization problem, and then an incremental alternating direction method of multipliers (ADMM) algorithm is proposed to locally update the beamformer. The complexity and convergence of the proposed method are analyzed, and these results show that the performance of the new scheme can asymptotically approach that of the centralized one as the number of iterations increases. Results also show that IRSs can significantly increase the system sum-rate of cell-free networks and that the proposed method outperforms existing decentralized methods.) <|cite_end|> proposed a fully decentralized design framework for cooperative beamforming. In the aforementioned works on RIS-aided cell-free systems, the instantaneous channel state information (CSI) of all links is assumed to be perfectly known at the BSs. However, this is impractical, since accurate CSI acquisition may incur significant overhead due to the large number of RIS elements <|cite_start|> (Reference: Channel Estimation for RIS-Empowered Multi-User MISO Wireless Communications: Reconfigurable Intelligent Surfaces (RISs) have recently been considered as an energy-efficient solution for future wireless networks due to their fast and low-power configuration, which has increased potential in enabling massive connectivity and low-latency communications. Accurate and low-overhead channel estimation in RIS-based systems is one of the most critical challenges due to the usually large number of RIS unit elements and their distinctive hardware constraints. In this paper, we focus on the uplink of a RIS-empowered multi-user Multiple Input Single Output (MISO) communication system and propose a channel estimation framework based on the parallel factor decomposition to unfold the resulting cascaded channel model. We present two iterative estimation algorithms for the channels between the base station and RIS, as well as the channels between the RIS and users. One is based on alternating least squares (ALS), while the other uses vector approximate message passing to iteratively reconstruct the two unknown channels from the estimated vectors. To theoretically assess the performance of the ALS-based algorithm, we derived its estimation Cram\'er-Rao Bound (CRB). We also discuss the downlink achievable sum rate computation with estimated channels and different precoding schemes for the base station. Our extensive simulation results show that our algorithms outperform benchmark schemes and that the ALS technique achieves the CRB. It is also demonstrated that the sum rate using the estimated channels always reaches that of perfect channels under various settings, thus verifying the effectiveness and robustness of the proposed estimation algorithms.) <|cite_end|>.
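The overhead is easy to quantify even in the simplest single-antenna setting: there is one unknown cascaded coefficient per RIS element plus the direct channel, so unstructured least-squares training needs at least $N+1$ pilot slots, i.e., the pilot overhead grows linearly with the surface size. A hedged sketch follows (the DFT training pattern and 20 dB pilot SNR are arbitrary assumptions; the cited framework instead exploits a parallel factor structure of the cascaded channel):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16                                     # RIS elements -> N + 1 unknowns in total
T = N + 1                                  # pilot slots: overhead grows with the RIS size
snr = 10 ** (20 / 10)                      # 20 dB pilot SNR

crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
x = np.concatenate([crandn(1), crandn(N)])             # [direct h_d, cascaded coefficients]

# Train the RIS with DFT phase patterns; with the all-ones column for the direct path,
# A collects all T columns of the T-point DFT, so the LS problem is perfectly conditioned.
Phi = np.exp(-2j * np.pi * np.outer(np.arange(T), np.arange(1, N + 1)) / T)
A = np.hstack([np.ones((T, 1)), Phi])
y = A @ x + crandn(T) / np.sqrt(snr)                   # noisy pilot observations

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
nmse = np.linalg.norm(x_hat - x) ** 2 / np.linalg.norm(x) ** 2
print(f"T = {T} pilot slots for N = {N} elements, NMSE = {10 * np.log10(nmse):.1f} dB")
```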
Responding to this, several works have proposed the idea of using statistical CSI\footnote{Statistical CSI refers to the information on the LoS components, the path-loss coefficients and the Rician $K$ factors, which can be viewed as constant over several coherence intervals.} for the design of RIS-assisted systems <|cite_start|> (Reference: {RIS-assisted multi-user MISO communications exploiting statistical CSI: Reconfigurable intelligent surface (RIS) is a promising solution to build a programmable wireless environment with reconfigurable passive elements, which can achieve high spectral and energy efficiency. In this paper, we investigate the ergodic capacity of RIS-assisted multi-user multiple-input single-output (MISO) wireless systems in both uplink and downlink scenarios. Unlike most prior works, where instantaneous channel state information (CSI) is assumed, we consider the realistic scenario with only statistical CSI. For both scenarios, we first present analytical expressions for the ergodic sum capacity of the system. Based on these, the joint power control (or transmit beamforming) and phase shift design problem maximizing the ergodic sum capacity is formulated. Capitalizing on the alternating direction method of multipliers (ADMM), fractional programming (FP) and alternating optimization (AO) methods, efficient suboptimal solutions are obtained for the non-convex design problems. Simulation results are presented to validate the accuracy of the analytical ergodic sum capacity expressions and evaluate the impact of key system parameters such as CSI, Rician $K$ factor, number of RIS elements, and RIS location on the ergodic capacity performance. The findings suggest that the proposed statistical CSI design achieves decent performance compared with the instantaneous CSI based design. Moreover, a signal hot spot can be created when placing the RIS close to the users.) <|cite_end|> <|cite_start|> (Reference: Weighted Sum-Rate of Intelligent Reflecting Surface Aided Multiuser Downlink Transmission with Statistical {CSI: Intelligent reflecting surface (IRS) is a newly emerged technology that can increase the energy and spectral efficiency of wireless communication systems. This paper considers an IRS-aided multi-user multiple-input single-output (MISO) communication system, and presents a detailed analysis and optimization framework for the weighted sum-rate (WSR) of the downlink transmission over Rician fading channels. Unlike most of the prior works, where the active beamformer at the base station (BS) and the passive beamformer at the IRS are jointly designed based on the instantaneous channel state information (CSI), this paper proposes a low-complexity transmission protocol where the IRS passive beamforming and the BS power allocation coefficient vector are optimized in the large timescale based on the statistical CSI, and the BS transmit beamforming is designed in the small timescale based on only the instantaneous CSI of the effective BS-user channels. Therefore, the channel training overhead in each channel coherence interval under our proposed protocol is independent of the number of IRS reflecting elements, which is in sharp contrast to most of the prior works. By considering maximum-ratio transmit beamforming at the BS, we derive a lower bound of the ergodic WSR in closed-form. Then, we propose an efficient algorithm to jointly optimize the IRS passive beamforming and the BS power allocation coefficient vector for maximizing the ergodic WSR lower bound.
Numerical results validate the tightness of our derived WSR bound and show that the proposed scheme outperforms various existing schemes in terms of complexity or capacity performance.) <|cite_end|> <|cite_start|> (Reference: Large System Achievable Rate Analysis of RIS-Assisted MIMO Wireless Communication with Statistical CSIT: Reconfigurable intelligent surface (RIS) is an emerging technology to enhance wireless communication in terms of energy cost and system performance by equipping a considerable quantity of nearly passive reflecting elements. This study focuses on a downlink RIS-assisted multiple-input multiple-output (MIMO) wireless communication system that comprises three communication links of Rician channel, including base station (BS) to RIS, RIS to user, and BS to user. The objective is to design an optimal transmit covariance matrix at BS and diagonal phase-shifting matrix at RIS to maximize the achievable ergodic rate by exploiting the statistical channel state information at BS. Therefore, a large-system approximation of the achievable ergodic rate is derived using the replica method in large dimension random matrix theory. This large-system approximation enables the identification of asymptotic-optimal transmit covariance and diagonal phase-shifting matrices using an alternating optimization algorithm. Simulation results show that the large-system results are consistent with the achievable ergodic rate calculated by Monte Carlo averaging. The results verify that the proposed algorithm can significantly enhance the RIS-assisted MIMO system performance.) <|cite_end|>. However, the utilization of statistical CSI for both active and passive beamformers may severely degrade the system performance. As such, a novel countermeasure named two-timescale transmission protocol <|cite_start|> (Reference: Intelligent reflecting surface enhanced wireless networks: two-timescale beamforming optimization: Intelligent reflecting surface (IRS) has drawn a lot of attention recently as a promising new solution to achieve high spectral and energy efficiency for future wireless networks. By utilizing massive low-cost passive reflecting elements, the wireless propagation environment becomes controllable and thus can be made favorable for improving the communication performance. Prior works on IRS mainly rely on the instantaneous channel state information (I-CSI), which, however, is practically difficult to obtain for IRS-associated links due to its passive operation and large number of reflecting elements. To overcome this difficulty, we propose in this paper a new two-timescale (TTS) transmission protocol to maximize the achievable average sum-rate for an IRS-aided multiuser system under the general correlated Rician channel model. Specifically, the passive IRS phase shifts are first optimized based on the statistical CSI (S-CSI) of all links, which varies much slowly as compared to their I-CSI; while the transmit beamforming/precoding vectors at the access point (AP) are then designed to cater to the I-CSI of the users’ effective fading channels with the optimized IRS phase shifts, thus significantly reducing the channel training overhead and passive beamforming design complexity over the existing schemes based on the I-CSI of all channels. Besides, for ease of practical implementation, we consider discrete phase shifts at each reflecting element of the IRS. 
For the single-user case, an efficient penalty dual decomposition (PDD)-based algorithm is proposed, where the IRS phase shifts are updated in parallel to reduce the computational time. For the multiuser case, we propose a general TTS stochastic successive convex approximation (SSCA) algorithm by constructing a quadratic surrogate of the objective function, which cannot be explicitly expressed in closed-form. Simulation results are presented to validate the effectiveness of our proposed algorithms and evaluate the impact of S-CSI and channel correlation on the system performance.) <|cite_end|> is quite suitable for RIS-aided cell-free systems. On the one hand, the passive beamformers at the RISs are optimized by exploiting the statistical CSI, without having to acquire CSI in every time slot. On the other hand, the active beamformers at the BSs are designed using the instantaneous CSI of the effective channels, where the process of channel estimation is almost the same as that in conventional multiple-input multiple-output (MIMO) systems. While the potential of two-timescale beamforming in the RIS-aided cell-free system is conceivable, fundamental understanding and design guidelines are still lacking. Motivated by this, we propose an effective and low-complexity solution to design this system. The main contributions of this paper can be summarized as follows: \begin{itemize} \item The closed-form expression of the achievable weighted sum-rate is derived, which facilitates the understanding of the impact of key system parameters, such as the Rician $K$ factor, the number of BS antennas, and the number of RIS elements, on the achievable rate. Furthermore, we investigate a special case without line-of-sight (LoS) components to gain more insights, in which the weighted sum-rate increases logarithmically with the number of BS antennas $M$, i.e., as $\mathcal{O}(\log M)$. Therefore, the system benefits significantly from the adoption of a large number of BS antennas. \item An achievable weighted sum-rate maximization problem under the two-timescale transmission protocol is formulated and decomposed into two subproblems via an alternating optimization framework. Specifically, a penalty dual decomposition (PDD)-based method is conceived to optimize the RIS beamformers based on the statistical CSI, while a primal dual subgradient (PDS)-based method is proposed to design the BS beamformers. Moreover, theoretical analyses of the properties of the proposed algorithm, i.e., its convergence behavior and computational complexity, are also provided. \item Finally, simulation results are presented to validate the tightness of our derived analytical expression and show the fast convergence of our proposed algorithm. The findings illustrate that the performance of the proposed algorithm with two-timescale CSI is comparable to that with instantaneous CSI in low or moderate SNR regimes. The impact of key system parameters such as the number of RIS elements, CSI settings and the Rician factor is also evaluated. Moreover, the remarkable advantages of adopting the cell-free paradigm and deploying RISs are demonstrated. \end{itemize} \subsection{Structure and Notations} The rest of this paper is organized as follows. In Section~\ref{model}, we present the channel model and transmission protocol. The closed-form expression of the achievable weighted sum-rate and a special case without LoS components are analyzed in Section~\ref{analysis}.
Then, the average weighted sum-rate maximization problem is formulated, and the design of the RIS phase shifts and BS power allocation coefficients is elaborated in Section~\ref{design}. In Section~\ref{simulation}, numerical results are provided to validate the tightness of the analytical expressions, evaluate the performance of the proposed algorithm and demonstrate the impacts of key parameters. Finally, we conclude the paper in Section~\ref{conclusion}. {\it Notation}: We use bold lower case letters to denote vectors and lower case letters to denote scalars. $\text{Re}\{ \cdot \}$ represents the real part of a complex value. $\mathbf{I}_N$ denotes an identity matrix with subscript $N$ being the matrix dimension. The operators $\mathbb{E}\{ \cdot \}$, $\text{tr}(\cdot)$ and $\|\cdot\|$ stand for the expectation, trace and Euclidean norm operations, respectively. The superscripts $(\cdot)^{*}$, $(\cdot)^{T}$ and $(\cdot)^H$ denote the conjugate, transpose, and conjugate-transpose operations, respectively. $\mathcal{CN}(\mathbf{a},\mathbf{B})$ represents the symmetric complex-valued Gaussian distribution with mean $\mathbf{a}$ and covariance matrix $\mathbf{B}$. The operation $\text{diag}(\mathbf{x})$ generates a diagonal matrix with the elements of $\mathbf{x}$ along its main diagonal, while $\text{diag}(\mathbf{X})$ denotes the block diagonal operation. In addition, the operation $\text{Diag}(\mathbf{A})$ returns a column vector of the main diagonal elements of $\mathbf{A}$. <|paper_end|>
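As a quick numerical sanity check on the no-LoS special case highlighted in the contributions above — the rate growing logarithmically with the number of BS antennas $M$ — the ergodic maximum-ratio-combining rate $\mathbb{E}\{\log_2(1+\rho\|\mathbf{h}\|^2)\}$ under i.i.d. Rayleigh fading can be simulated and set against $\log_2(1+\rho M)$. This is a generic single-user illustration of the scaling, not the paper's multi-user derivation; the SNR $\rho$ and trial count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, trials = 1.0, 2000                      # nominal SNR and Monte Carlo runs

for M in [8, 16, 32, 64, 128]:
    h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    rate = np.mean(np.log2(1 + rho * np.sum(np.abs(h) ** 2, axis=1)))   # MRC ergodic rate
    print(f"M = {M:4d}: E[rate] = {rate:5.2f} bit/s/Hz,  log2(1 + rho*M) = {np.log2(1 + rho * M):5.2f}")
```

Each doubling of $M$ adds roughly one bit/s/Hz at this SNR, i.e., the rate grows on the order of $\log M$, which is the behavior the closed-form analysis makes precise.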
[ "<|reference_start|> Precoding and power optimization in cell-free massive mimo systems: Cell-free Massive multiple-input multiple-output (MIMO) comprises a large number of distributed low-cost low-power single antenna access points (APs) connected to a network controller. The number of AP antennas is significantly larger than the number of users. The system is not partitioned into cells and each user is served by all APs simultaneously. The simplest linear precoding schemes are conjugate beamforming and zero-forcing. Max–min power control provides equal throughput to all users and is considered in this paper. Surprisingly, under max–min power control, most APs are found to transmit at less than full power. The zero-forcing precoder significantly outperforms conjugate beamforming. For zero-forcing, a near-optimal power control algorithm is developed that is considerably simpler than exact max–min power control. An alternative to cell-free systems is small-cell operation in which each user is served by only one AP for which power optimization algorithms are also developed. Cell-free Massive MIMO is shown to provide five- to ten-fold improvement in 95%-likely per-user throughput over small-cell operation. <|reference_end|>", "<|reference_start|> Cell-Free Massive MIMO versus Small Cells: A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs)which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max-min power control algorithms. Max-min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly 5-fold improvement in 95%-likely per-user throughput over the small-cell scheme, and 10-fold improvement when shadow fading is correlated. <|reference_end|>", "<|reference_start|> Channel Estimation for RIS-Empowered Multi-User MISO Wireless Communications: Reconfigurable Intelligent Surfaces (RISs) have been recently considered as an energy-efficient solution for future wireless networks due to their fast and low-power configuration, which has increased potential in enabling massive connectivity and low-latency communications. Accurate and low-overhead channel estimation in RIS-based systems is one of the most critical challenges due to the usually large number of RIS unit elements and their distinctive hardware constraints. 
In this paper, we focus on the uplink of a RIS-empowered multi-user Multiple Input Single Output (MISO) uplink communication systems and propose a channel estimation framework based on the parallel factor decomposition to unfold the resulting cascaded channel model. We present two iterative estimation algorithms for the channels between the base station and RIS, as well as the channels between RIS and users. One is based on alternating least squares (ALS), while the other uses vector approximate message passing to iteratively reconstruct two unknown channels from the estimated vectors. To theoretically assess the performance of the ALS-based algorithm, we derived its estimation Cram\\'er-Rao Bound (CRB). We also discuss the downlink achievable sum rate computation with estimated channels and different precoding schemes for the base station. Our extensive simulation results show that our algorithms outperform benchmark schemes and that the ALS technique achieves the CRB. It is also demonstrated that the sum rate using the estimated channels always reach that of perfect channels under various settings, thus, verifying the effectiveness and robustness of the proposed estimation algorithms. <|reference_end|>", "<|reference_start|> Weighted Sum-Rate of Intelligent Reflecting Surface Aided Multiuser Downlink Transmission with Statistical {CSI: Intelligent reflecting surface (IRS) is a newly emerged technology that can increase the energy and spectral efficiency of wireless communication systems. This paper considers an IRS-aided multi-user multiple-input single-output (MISO) communication system, and presents a detailed analysis and optimization framework for the weighted sum-rate (WSR) of the downlink transmission over Rician fading channels. Unlike most of the prior works where the active beamformer at the base station (BS) and passive beamformer at the IRS are jointly designed based on the instantaneous channel state information (CSI), this paper proposes a low-complexity transmission protocol where the IRS passive beamforming and BS power allocation coefficient vector are optimized in the large timescale based on the statistical CSI, and the BS transmit beamforming is designed in the small timescale based on only the instantaneous CSI of the effective BS-user channels. Therefore, the channel training overhead in each channel coherence interval under our proposed protocol is independent of the number of IRS reflecting elements, which is in sharp contrast to most of the prior works. By considering maximum-ratio transmit beamforming at the BS, we derive a lower bound of the ergodic WSR in closed-form. Then, we propose an efficient algorithm to jointly optimize the IRS passive beamforming and BS power allocation coefficient vector for maximizing the ergodic WSR lower bound. Numerical results validate the tightness of our derived WSR bound and show that the proposed scheme outperforms various existing schemes in terms of complexity or capacity performance. <|reference_end|>" ]
[ 0, 1, 26, 28 ]
{"<|multi_cite_1_1|>": "ss-684990", "<|multi_cite_1_2|>": "arxiv-92967", "<|multi_cite_1_3|>": "arxiv-221572", "<|cite_2|>": "ss-1177301", "<|cite_3|>": "arxiv-279583", "<|multi_cite_4_1|>": "arxiv-208582", "<|multi_cite_4_2|>": "arxiv-176441", "<|multi_cite_5_1|>": "ss-989720", "<|multi_cite_5_2|>": "arxiv-289694", "<|multi_cite_5_3|>": "arxiv-249968", "<|multi_cite_5_4|>": "arxiv-362147", "<|multi_cite_5_5|>": "ss-1276606", "<|multi_cite_5_6|>": "ss-1225947", "<|multi_cite_5_7|>": "ss-1788474", "<|multi_cite_6_1|>": "ss-989720", "<|multi_cite_6_2|>": "arxiv-289694", "<|cite_7|>": "arxiv-249968", "<|cite_8|>": "arxiv-362147", "<|cite_9|>": "ss-1276606", "<|cite_10|>": "arxiv-231774", "<|multi_cite_11_1|>": "arxiv-247365", "<|multi_cite_11_2|>": "arxiv-304071", "<|multi_cite_11_3|>": "arxiv-273602", "<|cite_12|>": "arxiv-247365", "<|cite_13|>": "arxiv-304071", "<|cite_14|>": "arxiv-273602", "<|cite_15|>": "arxiv-282591", "<|multi_cite_16_1|>": "ss-721306", "<|multi_cite_16_2|>": "ss-764902", "<|multi_cite_16_3|>": "arxiv-327851", "<|cite_17|>": "ss-890914"}
2307.16535
<|paper_start|> Title: Introducing and Interfacing with Cybersecurity -- A Cards Approach Abstract: Introducing and Interfacing with Cybersecurity -- A Cards Approach: Cybersecurity is an important topic which is often viewed as one that is inaccessible due to steep learning curves and a perceived requirement for specialist knowledge. With a constantly changing threat landscape, practical solutions such as best practices are employed, but the number of critical cybersecurity-related incidents remains high. To address these concerns, the National Cyber Security Centre published a Cybersecurity Body of Knowledge (CyBOK) to provide a comprehensive information base used to advise and underpin cybersecurity learning. Unfortunately, CyBOK contains over 1000 pages of in-depth material and may not be easy to navigate for novice individuals. Furthermore, it does not allow for easy expression of various cybersecurity scenarios that such individuals may be exposed to. As a solution to these two issues, we propose the use of a playing cards format to provide introductory cybersecurity knowledge that supports learning and discussion, using CyBOK as the foundation for the technical content. Upon evaluation in two user studies, we found that 80% of the participants agreed the cards provided them with introductory knowledge of cybersecurity topics, and 70% agreed the cards provided an interface for discussing topics and enabled them to make links between attacks, vulnerabilities and defences. Introduction \label{sec:introduction} Cybersecurity remains a fundamental concern to users of computer systems, with security often being overlooked due to its portrayal as a subject pertaining to issues of perceived technical difficulty, steep learning curves and a requirement of specialist knowledge and/or expertise <|cite_start|> (Reference: The Cybersecurity Triad: Government, Private Sector Partners, and the Engaged Cybersecurity Citizen (Journal of Homeland Security and Emergency Management): In May 2009, the Obama administration released its Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, which it expected would lay the groundwork for a new national cybersecurity strategy. Staking out separate policy development space, Congressional leaders began hearings and introduced legislation. The most significant – the Cybersecurity Act of 2009 – proposed major changes in current federal government approaches. The common starting point of all of these reform efforts is that current federal organization and current national cybersecurity policy are inadequate for the task of securing cyberspace. This article analyzes past federal reorganization efforts in response to the last technological revolution with serious national security implications – nuclear technology – and the more recent response to homeland security. While much of the current cybersecurity debate leans toward radical reform, we counsel an incremental approach to reorganization that builds on the hard work of the last decade combined with a genuine reconceptualization of the threat solution set. Borrowing from the language of the nuclear era, we call for cybersecurity to rest on a balanced triad of intergovernmental relations, private corporate involvement, and active cyber citizenship as a resilient model that can manage this new and challenging security environment. In particular, we introduce the third leg as a critical new concept that has been absent from standard policy debate.
The road to cybersecurity is destined to be long, circuitous, and difficult. Extensive negotiations between federal, state, local, and private sector leaders loom. No truly significant federal policy reform can be achieved without considering the intergovernmental policy dimensions combined with the overall threat perception driving those reforms. Success will remain elusive if government-to-private-business relations do not improve, and much will be undermined if the general public remains inactive in contributing to national cybersecurity.) <|cite_end|> <|cite_start|> (Reference: Thinking across stovepipes: Using a holistic development strategy to build the cybersecurity workforce: This article proposes a holistic approach to developing the cybersecurity workforce based on careful integration of workforce development strategies into a plan that involves educators, career professionals, employers, and policymakers. First, it motivates this by describing how other fields such as medicine have successfully done this and arguing that cybersecurity is, like medicine, inherently cross-disciplinary at multiple levels of expertise and performance, making it similar in complexity to the medical profession and thus a good candidate for some of the solutions developed there. The article then focuses on one element of a holistic strategy – education – and discusses the findings of a recent workshop on cybersecurity education. It then places those findings in the context of the broader discussion and suggests some practical steps. These encourage computer science educators, human resources professionals, and the functional experts from disciplines that will attract computer science graduates to think beyond their “stovepiped” fields and collaborate so that holistic, integrated solutions can be developed, accepted, and implemented.) <|cite_end|>. While the security foundations of computer-based systems have improved over time, limiting the potential for, or mitigating the effects of, attacks arising from vulnerabilities requires the involvement of all users of these systems (e.g. the general population) and is a necessary step to improve the understanding of cybersecurity <|cite_start|> (Reference: Users Are Not the Enemy: Many system security departments treat users as a security risk to be controlled. The general consensus is that most users are careless and unmotivated when it comes to system security. In a recent study, we found that users may indeed compromise computer security mechanisms, such as password authentication, both knowingly and unknowingly. A closer analysis, however, revealed that such behavior is often caused by the way in which security mechanisms are implemented, and by users’ lack of knowledge. We argue that to change this state of affairs, security departments need to communicate more with users, and adopt a user-centered design approach.) <|cite_end|>. Moreover, the increasing complexity and diversity of the threat landscape for cybersecurity <|cite_start|> (Reference: Crime, security and information communication technologies: The changing cybersecurity threat landscape and its implications for regulation and policing: Networked digital technologies have transformed crime to a point that ‘cybercrime’ is here to stay. In the future, society will be forced to respond to a broad variety of networked crimes that will increase both the complexity of crime investigation and prevention, whilst also deepening the regulative challenges.
As cybercrime has become an inescapable feature of the Internet landscape, constructive management and system development to mitigate cybercrime threats and harms are imperatives. This chapter explores the changing cybersecurity threat landscape and its implications for regulation and policing. It considers how networked and digital technologies have affected society and crime; it identifies how the cybersecurity threat and crime landscape have changed and considers how digital technologies affect our ability to regulate them. It also suggests how we might understand cybercrime before outlining both the technological developments that will drive future cybercrime and the consequences of failing to respond to those changes.) <|cite_end|> <|cite_start|> (Reference: Risks of Increase in the IoT Devices: The Internet of Things (IoT) involves various objects and communication methods. IoT manufacturers are continually creating new devices that are unprecedented in past analyses. The security of IoT involves different aspects including confidentiality, integrity, and authentication. However, most of the current risk assessment methodologies are designed for general-purpose software systems and hence lack a holistic approach for assessing risks in IoT systems, especially due to the diversity of such systems. This paper discusses the risks related to the continually increasing number of IoT devices and the cautionary measures to consider in developing these devices.) <|cite_end|> further substantiates the need for improving understanding of cybersecurity. In the domain of software engineering, practical solutions to achieve this include activities such as documenting the vulnerabilities of computer systems and updating the respective knowledge bases. Open databases such as the Common Vulnerabilities and Exposures (CVE) <|cite_start|> (Reference: Predicting Vulnerability Type in Common Vulnerabilities and Exposures (CVE) Database with Machine Learning Classifiers: Vulnerability type is not part of the standard CVE scheme, so the ability to determine it only on the basis of the text description would be very useful for automated vulnerability handling. The growing number of hardware and software vulnerabilities discovered every year makes manual classification of vulnerability types increasingly difficult. This justifies the need for automatic machine learning classification. In this study we research the performance of base ML classifier algorithms, such as Linear Support Vector Classification, Naive Bayes, and Random Forest Classifier. To measure the performance of our classifiers, we use precision, recall, and f1-score evaluation metrics. Previous studies have focused on machine learning methods for predicting platform vendors and products, vulnerability scoring, and software vulnerability exploitation. Our study aims to show that machine learning is suitable for automated vulnerability type classification.) <|cite_end|> and Common Weakness Enumeration (CWE) <|cite_start|> (Reference: Common weakness enumeration (CWE) status update: This paper is a status update on the Common Weakness Enumeration (CWE) initiative [1], one of the efforts focused on improving the utility and effectiveness of code-based security assessment technology. As hoped, the CWE initiative has helped to dramatically accelerate the use of tool-based assurance arguments in reviewing software systems for security issues and invigorated the investigation of code implementation, design, and architecture issues with automation.)
<|cite_end|> have played a pivotal role in raising the awareness of known vulnerabilities such that appropriate defensive measures can be developed or updated. While these reference databases are well maintained, they may still appear complex to the general population and may contribute to the already existing problems of inaccessibility and specialist requirements that surround the topic of cybersecurity. Because of this, several knowledge bases have been developed to inform and underpin cybersecurity education and training <|cite_start|> (Reference: National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (Portuguese translation): This publication describes the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (NICE Framework), a reference structure that describes the interdisciplinary nature of the cybersecurity work. It serves as a fundamental reference resource for describing and sharing information about cybersecurity work and the knowledge, skills, and abilities (KSAs) needed to complete tasks that can strengthen the cybersecurity posture of an organization. As a common, consistent lexicon that categorizes and describes cybersecurity work, the NICE Framework improves communication about how to identify, recruit, develop, and retain cybersecurity talent. The NICE Framework is a reference source from which organizations or sectors can develop additional publications or tools that meet their needs to define or provide guidance on different aspects of cybersecurity workforce development, planning, training, and education.) <|cite_end|>, which aim to address these issues at a high-school or higher-education level. Although they may be a useful learning resource for providing key cybersecurity knowledge, their primary purpose is to be used by those who are already knowledgeable in cybersecurity to develop further curricula to teach those who may have little-to-no knowledge of cybersecurity. Furthermore, among these knowledge bases, there may be some key topics which are not covered and their format and density may not be perceived as accessible to novice users. Thus, this may directly impact one's ability not only to understand key cybersecurity topics but also to make links between these topics to capture real-world cybersecurity scenarios. Ultimately, the weaknesses of existing solutions, namely limited accessibility, steep learning curves and a perceived requirement of specialist knowledge/expertise, must be ameliorated by a new solution that provides an answer to the following research questions. Specifically, can a new solution: \begin{questions}[leftmargin=*,align=left] \item Provide introductory cybersecurity knowledge to novice users? \item Provide material for expressing interpretation and documentation of key cybersecurity topics, which can support independent learning and self-efficacy? \item Act as an index for the CyBOK knowledge base which provides an interface for discussion on key cybersecurity topics? \item Provide links between key cybersecurity topics, allowing the generation of concepts which can capture various cybersecurity scenarios?
\end{questions} In this paper, we provide an answer to these research questions by proposing the use of a playing-card format as a medium that: provides introductory knowledge of key cybersecurity topics, acting as an index for the CyBOK knowledge base; supports independent learning and self-efficacy; and allows links to be made between key cybersecurity topics to capture real-world scenarios. The novelty of this work is three-fold. We first present the design principles for the cybersecurity cards to address these limitations. Second, we provide an evaluation of the cards in a workshop with masters-level students to understand whether the cards satisfy the aforementioned provisions. The output of this evaluation is a second revised deck of the cybersecurity cards. Third, we carried out the same workshop but with a different demographic from the first, with participants at late primary and early secondary school level (ages ranging from 10 to 15 years old, mean 12.8 years). The remainder of this paper is organised as follows. Section~\ref{sec:background} provides background and related work, as well as the selection procedure we applied to the production of our cybersecurity cards using the CyBOK knowledge base and the limitations of other approaches. The design principles applied to the cybersecurity cards, as well as the initial implementation (Version 1), are described in Section~\ref{sec:cards}. An evaluation of Version 1 of the cards is provided in Section~\ref{sec:evaluation}. In Section~\ref{sec:cards1}, we present Version 2 of the cards as a result of the findings from the first evaluation, as well as a further evaluation of the second version of the cards in Section~\ref{sec:evaluation2}. In Section~\ref{sec:discussion}, we provide a discussion of the results from both evaluations and the paper concludes in Section~\ref{sec:conclusion}. Related Work \label{sec:background} Producing practical and easy-to-learn cybersecurity learning material is a persistent challenge, one which stems from the evolving nature of cybersecurity and computing technologies as the number of connected users and devices scales. In recent years, the number of critical cybersecurity incidents has increased significantly, correlating with increasing numbers of online users during the Covid-19 pandemic, for example, as well as an increase in the adoption of various connected computer systems in day-to-day activities. Among these incidents, research shows that around 95\% of cybersecurity breaches occur as a result of human error and that organisations lack the sophistication, interest and/or knowledge to handle these threats. It has been shown that those in cybersecurity careers require a broad set of skills, including the ability to carry out various tasks at any time in non-traditional environments and to adapt to the dynamic nature of these environments. In the domain of software engineering, basic cybersecurity training, such as password best practices and multi-factor authentication, is employed for individuals to conform to, with the aim of alleviating concerns and mitigating the potential for liabilities that arise as a result of cybersecurity-related incidents <|cite_start|> (Reference: A review of BYOD security challenges, solutions and policy best practices: Recently, many employees' own smartphones and tablets; where they use these devices in the workplace either for personal use or to perform functional tasks.
Perhaps this considered as the main motivation for numerous organization to pay attention to ‘bring your own device’ business trend; which is related to the use of personal devices for business purposes. For an organization, this brings some opportunities and security risks. The establish of BYOD policies always be a tough task, as an organization needs to avoid security risks and increase employee productivity. However, the associated security risks with BYOD policy can be managed by developing an effective security policy and adopt some technical security control and procedures. This research paper covers the review of current BYOD security challenges and issues, security solutions and policy best practices in organizational perspective. Moreover, a comprehensive security policy model presented and discussed.) <|cite_end|> <|cite_start|> (Reference: Securing your remote workforce against new phishing attacks: ) <|cite_end|>. It has been identified that a large number of Android applications contain security-related code snippets copied and pasted from Stack Overflow, of which nearly 98\% contained at least one insecure code snippet <|cite_start|> (Reference: Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security: Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. Available information include vibrant discussions and oftentimes ready-to-use code snippets. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behaviour on code security is unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow including ready-to-use code snippets provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: Due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. In order to identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4% of the 1.3 million Android applications we analyzed, contained security-related code snippets from Stack Overflow. Out of these 97.9% contain at least one insecure code snippet.) <|cite_end|>. The value of security information depends strongly on its source <|cite_start|> (Reference: Identifying patterns in informal sources of security information: Computer users have access to computer security information from many different sources, but few people receive explicit computer security training. 
Despite this lack of formal education, users regularly make many important security decisions, such as “Should I click on this potentially shady link?” or “Should I enter my password into this form?” For these decisions, much knowledge comes from incidental and informal learning. To better understand differences in the security-related information available to users for such learning, we compared three informal sources of computer security information: news articles, web pages containing computer security advice, and stories about the experiences of friends and family. Using a Latent Dirichlet Allocation topic model, we found that security information from peers usually focuses on who conducts attacks, information containing expertise focuses instead on how attacks are conducted, and information from the news focuses on the consequences of attacks. These differences may prevent users from understanding the persistence and frequency of seemingly mundane threats (viruses, phishing), or from associating protective measures with the generalized threats the users are concerned about (hackers). Our findings highlight the potential for sources of informal security education to create patterns in user knowledge that affect their ability to make good security decisions.) <|cite_end|> and reputable information sources are only useful so long as they are well-understood and perceived as actionable <|cite_start|> (Reference: Educational Design Research for the Development of a Collectible Card Game for Cybersecurity Learning: ) <|cite_end|> <|cite_start|> (Reference: A Comprehensive Quality Evaluation of Security and Privacy Advice on the Web: End users learn defensive security behaviors from a variety of channels, including a plethora of security advice given in on-line articles. A great deal of effort is devoted to getting users to follow this advice. Surprisingly then, little is known about the quality of this advice: Is it comprehensible? Is it actionable? Is it effective? To answer these questions, we first conduct a large-scale, user-driven measurement study to identify 374 unique recommended behaviors contained within 1,264 documents of online security and privacy advice. Second, we develop and validate measurement approaches for evaluating the quality – comprehensibility, perceived actionability, and perceived efficacy – of security advice. Third, we deploy these measurement approaches to evaluate the 374 unique pieces of security advice in a user-study with 1,586 users and 41 professional security experts. Our results suggest a crisis of advice prioritization. The majority of advice is perceived by the most users to be at least somewhat actionable, and somewhat comprehensible. Yet, both users and experts struggle to prioritize this advice. For example, experts perceive 89% of the hundreds of studied behaviors as being effective, and identify 118 of them as being among the “top 5” things users should do, leaving end-users on their own to prioritize and take action to protect themselves.) <|cite_end|>. While sites such as Stack Overflow are reputable for providing actionable solutions, it is clear that the security of those solutions is not well understood. Novice individuals, such as those who write and/or deploy software code without formal software engineering training, may not fully comprehend the impact of not adhering to security best practices.
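The measurement study cited above concerned Java snippets in Android applications; purely as a language-neutral illustration of the class of insecure snippet at issue, the following Python sketch contrasts a commonly copy-pasted, injection-prone query pattern with its parameterised equivalent (the table and data are invented for the demonstration):
\begin{verbatim}
import sqlite3

def find_user_insecure(conn, username):
    # Insecure: the user-supplied value is spliced into the SQL string,
    # so an input such as "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Secure: a parameterised query keeps data separate from the SQL
    # structure, so the value can never alter the query's meaning.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print(find_user_insecure(conn, malicious))  # returns every row
    print(find_user_secure(conn, malicious))    # returns no rows
\end{verbatim}
Both functions look equally plausible when copied in isolation, which is precisely why reputable-looking but insecure snippets propagate so easily.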
To address this, various curricular guidelines and knowledge frameworks have been developed for cybersecurity, covering fundamental topics ranging from software and hardware security to networks and cyber-physical systems. The Joint Task Force (JTF) on Cybersecurity Education proposed a draft of curricular guidance on cybersecurity to support educational efforts in the USA <|cite_start|> (Reference: Cybersecurity curricular guidelines: The Cybersecurity Curricular Guidelines, a joint effort of the ACM, IEEE Computer Society, AIS SIGSAC, and IFIP WG 11.8, were created to provide developers of cybersecurity curricula with guidelines for material to include. The curricular guidelines have eight knowledge areas, broken down into knowledge units and topics. Underlying cross-cutting concepts provide linkages among the knowledge areas. Disciplinary lenses enable the developer to emphasize the knowledge units appropriate to the goals of the developed curricula. Each knowledge area also includes a list of essential concepts that all curricula should cover to an appropriate depth. The guidelines can be linked to workforce frameworks and certification criteria as well as academic curricula.) <|cite_end|>. They designed a framework model for a body of knowledge that covers six knowledge areas spanned by several cross-cutting concepts, targeting specific disciplines and application areas that pertain to the demographic of cybersecurity professionals. The National Initiative for Cybersecurity Education (NICE) <|cite_start|> (Reference: National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (Portuguese translation): This publication describes the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (NICE Framework), a reference structure that describes the interdisciplinary nature of the cybersecurity work. It serves as a fundamental reference resource for describing and sharing information about cybersecurity work and the knowledge, skills, and abilities (KSAs) needed to complete tasks that can strengthen the cybersecurity posture of an organization. As a common, consistent lexicon that categorizes and describes cybersecurity work, the NICE Framework improves communication about how to identify, recruit, develop, and retain cybersecurity talent. The NICE Framework is a reference source from which organizations or sectors can develop additional publications or tools that meet their needs to define or provide guidance on different aspects of cybersecurity workforce development, planning, training, and education.) <|cite_end|> is a cybersecurity workforce framework, developed by NIST in the USA, which aims to provide a foundation for describing and sharing information about knowledge, skills and abilities in cybersecurity to strengthen an organisation's cybersecurity. The National Cyber Security Centre (NCSC) in the UK proposed a Certified Master's Program that defines several pathways to address knowledge and skill gaps in cybersecurity education, which describe what topics must be covered and to what depth. While all these frameworks tend to agree on key cybersecurity topics that must be understood, each places greater emphasis on only a subset of topics. For example, NICE covers a wide range of key topics but gaps exist, such as topics related to cyber-physical systems and human factors.
The NCSC Certified Master's Program does not place much emphasis on attacks and defences, but in contrast focuses on key topics such as software security. The Cybersecurity Body of Knowledge (CyBOK) is a knowledge base developed by the University of Bristol and funded by the NCSC. It was developed to encompass the wide variety of topics within the field of cybersecurity and to show that the field also spans multiple disciplines. In practice, it has been successful in providing a framework for NCSC certified degrees and academic/professional training programmes. CyBOK is decomposed into 21 knowledge areas (KAs) (as of version 1.1), each introduced by a reference document and a set of topics presented as a branch of the overall {\em Knowledge Tree} (Figure~\ref{fig:cybokfulltree}). Each of these knowledge areas is organised into a hierarchy of between 3 and 5 categories that present as a tree of topics. For each KA in CyBOK, a number of chapters form an encyclopedic collection of key concepts based on state-of-the-art academic literature. These key concepts are known as {\em Topics}, with some {\em Topics} decomposed further into a set of more specialised subjects ({\em Sub-Topics}). For example, the category of {\em Software Security} in the {\em Software and Platform Security} KA contains 4 overarching themes, split into 20 sub-topics (e.g. structured output generation vulnerabilities), each of which describes further specialised information (e.g. SQL injection). \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{img/cybokfulltree.pdf} \caption{ \centering Partial View of CyBOK 1.1 Knowledge Tree. The knowledge areas and topics that are highlighted show the subset of the CyBOK knowledge base that was selected due to the link to the domain of software engineering. } \label{fig:cybokfulltree} \end{figure*} It has been shown that, in comparison with other knowledge frameworks, CyBOK covers a wider range of knowledge areas and does not have gaps that are present within other frameworks <|cite_start|> (Reference: Mirror, Mirror, On the Wall: What are we Teaching Them All? Characterising the Focus of Cybersecurity Curricular Frameworks: Many cybersecurity curricular frameworks exist, but are they all equal? If a student takes a course based on one framework, what should they expect to get out of it? Different frameworks have different emphasis and will shape the courses implementing them leading to varying skill sets. This is not bad, but such biases should be clear. The Cybersecurity Body of Knowledge (CyBOK) is a broad guide to foundational cybersecurity knowledge developed through consultation with industry and academia. Using the knowledge areas from CyBOK as a basis for comparison, we characterise 4 curricular frameworks and find that different frameworks have different emphasis) <|cite_end|>. While CyBOK facilitates a body of knowledge which contributes to the production of material for cybersecurity education and professional training, there are some weaknesses which may render it an inaccessible resource to more novice individuals, such as those in the domain of software engineering who write or deploy code with no formal software engineering training. First, the links between meaning and relationships among topics and sub-topics vary across the entire Knowledge Tree, which prevents easy expression of various cybersecurity scenarios. Second, the material across the CyBOK knowledge base and its indexing structure is not easy to traverse for novice users.
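To make this hierarchy concrete, the following sketch shows one way a fragment of the Knowledge Tree could be represented and queried as a simple index. Only the node names quoted above are taken from CyBOK; the bracketed labels are illustrative placeholders, not the official taxonomy.
\begin{verbatim}
# Hypothetical fragment of the CyBOK Knowledge Tree; bracketed names
# are placeholders for branches not named in the text.
KNOWLEDGE_TREE = {
    "Software and Platform Security": {
        "Software Security": {
            "[Vulnerability theme]": {
                "Structured output generation vulnerabilities": [
                    "SQL injection",
                    "[other leaf sub-topics]",
                ],
            },
        },
    },
}

def find_paths(tree, term, path=()):
    """Yield the path from knowledge area down to every node matching term."""
    if isinstance(tree, dict):
        for name, child in tree.items():
            if term.lower() in name.lower():
                yield path + (name,)
            yield from find_paths(child, term, path + (name,))
    else:  # a list of leaf sub-topics
        for leaf in tree:
            if term.lower() in leaf.lower():
                yield path + (leaf,)

for p in find_paths(KNOWLEDGE_TREE, "sql injection"):
    print(" > ".join(p))
\end{verbatim}
An index of this shape, mapping a term to its position in the tree, is loosely the role that the cards described in this paper aim to play in physical form.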
Gonzalez et al. <|cite_start|> (Reference: Exploring CyBOK with Topic Modeling Techniques: ) <|cite_end|> show that it would be difficult for novice individuals to infer the links between various topics, given that some follow either a single predominant theme or span several topics themselves. Ultimately, to support novice users as well as those more experienced, key cybersecurity knowledge provided by knowledge bases such as CyBOK requires adequate presentation that can facilitate independent learning whilst also providing a suitable interface for discussion of various cybersecurity scenarios to make the links between meaning and relationships among topics. Aside from knowledge frameworks, cybersecurity information has also been presented in other ways. Capture the Flag (CTF) activities provide a series of competitive exercises used to find vulnerabilities in computer systems and applications and have been shown to be a valuable learning tool <|cite_start|> (Reference: Shell We Play A Game$\{$CTF-as-a-service$\}$ for Security Education: The United States is facing a cyber-security crisis. The supply-side of the cyber-security workforce is not keeping pace with demand: The 2015 (ISC) Global Information Security Workforce Study predicts a shortfall of 1.5 million global information security jobs by 2020 [6]. The lack of qualified cyber-security workforce gives rise to high-profile security incidents, such as the recent Office of Personal Management data breach, where hackers stole 21 million personal files containing sensitive background check information [4]. In addition, attacks against the nation’s critical infrastructure can have devastating effect that go well beyond the financial losses we are witnessing today. The rise in the sophistication of the modern hacker—who waits patiently, quietly leveraging vulnerabilities on one system to compromise another, then slowly exfiltrating sensitive data—demands an equal rise in the skills of security professionals and security-minded developers. Therefore, we must train the next generation of security professionals who will secure the software systems that run companies, organizations, and the nation’s critical infrastructure. Security training requires that developers acquire both the skills necessary to find security vulnerabilities in software, as well as the skills to fix existing flawed software. The knowledge that comes from studying vulnerabilities and vulnerability patterns provides students with the hands-on expertise to complement the theoretical security skills of protection, detection, and response. Live cyber-security exercises are an excellent tool to help teach and reinforce security concepts in students. In the traditional cyber-security exercise concept, also called Capture The Flag (CTF) competitions, the students attempt to discover one or more vulnerabilities in a piece of software (which the organizers created) and then prove that they found a vulnerability by crafting an exploit that takes advantage of the vulnerability, stealing a piece of information from the service (i.e., the flag). At the same time, the students develop patches and defense mechanisms to prevent the exploitation of the vulnerabilities. In this way, the students receive hands-on experience finding vulnerabilities, crafting exploits, and patching services. Previous research work on this topic has shown that not only do the students learn during the competitions, but they also experience significant learning in preparing for the competition [1].
Unfortunately, live attack-defense cyber-security competitions place a significant time and effort burden on the organizers, because they require a careful design of the infrastructure and a complex network configuration, including complex routing, network filters, and traffic anonymization1. In addition, the creation of vulnerable services requires a skill set that many security educators lack. This limits considerably the adoption of attack-defense live competitions in security curricula.) <|cite_end|> <|cite_start|> (Reference: Open Source and Commercial Capture The Flag Cyber Security Learning Platforms-A Case Study: The use of gamified learning platforms as a method of introducing cyber security education, training and awareness has risen greatly. With this rise, the availability of platforms to create, host or otherwise provide the challenges that make up the foundation of this education has also increased. In order to identify the best of these platforms, we need a method to compare their feature sets. In this paper, we compare related work on identifying the best platforms for a gamified cyber security learning platform as well as contemporary literature that describes the most needed feature sets for an ideal platform. We then use this to develop a metric for comparing these platforms, before then applying this metric to popular current platforms.) <|cite_end|>. Thomas et al. <|cite_start|> (Reference: Educational Design Research for the Development of a Collectible Card Game for Cybersecurity Learning: ) <|cite_end|> propose the use of a collectible card game (CCG) as a means of teaching cybersecurity to high school students, given that such games are culturally familiar across all age groups and encourage an understanding of competitive strategy and of mistake-making as a way of learning <|cite_start|> (Reference: Collectible Card Games as Learning Tools: ) <|cite_end|>. Anvik et al. <|cite_start|> (Reference: Program wars: a card game for learning programming and cybersecurity concepts: Although there are many computer science learning games with the goal of teaching programming, such games typically require the person to either learn an existing programming language or the game's own specialized language. This can be intimidating, confusing or frustrating for an individual when they cannot get their "program" to work correctly (e.g. syntax error, infinite loop). Additionally, such games commonly use a puzzle-solving approach that does not appeal to some demographics. This paper presents a programming-language-independent approach to teaching fundamental programming and cybersecurity concepts using simple vocabulary. This approach also uses the familiar activity of playing cards against opponents to create a more dynamic and engaging learning experience. The approach is demonstrated by a web-based game called Program Wars. Results from a user study show that players are able to effectively connect game concepts to actual programming language structures; however, whether players' comprehension of computer programming is improved is unclear.) <|cite_end|> propose the use of a web-based card game for learning programming and cybersecurity concepts, using simple vocabulary to create ubiquitous learning experiences. Denning et al.
<|cite_start|> (Reference: Control-Alt-Hack: the design and evaluation of a card game for computer security awareness and education: We scoped, designed, produced, and evaluated the effectiveness of a recreational tabletop card game created to raise awareness of and alter perceptions regarding-computer security. We discuss our process, the challenges that arose, and the decisions we made to address those challenges. As of May 2013, we have shipped approximately 800 free copies to 150 educators. We analyze and report on feedback from 22 of these educators about their experiences using Control-Alt-Hack with over 450 students in classroom and non-classroom contexts. The responses from the 14 educators who reported on their use of the game in a classroom context variously indicated that: their students' awareness of computer security as a complex and interesting field was increased (11/14); they would use the game again in their classroom (10/14); and they would recommend the game to others (13/14). Of note, 2 of the 14 classroom educators reported that they would not have otherwise covered the material. Additionally, we present results from user studies with 11 individuals and find that their responses indicate that 8 of the 11 had an increased awareness of computer security or a changed perception; furthermore, all of our intended goals are touched upon in their responses.) <|cite_end|> propose the use of a tabletop card game, Control-Alt-Hack, with the aim of providing awareness training for cybersecurity, arguing that playing card games can provide an accessible foundation for delivering digestible cybersecurity information to large audiences. However, while these gamified approaches show various levels of success, there are limitations. First, many of these different approaches have different target personas and goals. Second, card game approaches such as Control-Alt-Hack <|cite_start|> (Reference: Control-Alt-Hack: the design and evaluation of a card game for computer security awareness and education: We scoped, designed, produced, and evaluated the effectiveness of a recreational tabletop card game created to raise awareness of and alter perceptions regarding-computer security. We discuss our process, the challenges that arose, and the decisions we made to address those challenges. As of May 2013, we have shipped approximately 800 free copies to 150 educators. We analyze and report on feedback from 22 of these educators about their experiences using Control-Alt-Hack with over 450 students in classroom and non-classroom contexts. The responses from the 14 educators who reported on their use of the game in a classroom context variously indicated that: their students' awareness of computer security as a complex and interesting field was increased (11/14); they would use the game again in their classroom (10/14); and they would recommend the game to others (13/14). Of note, 2 of the 14 classroom educators reported that they would not have otherwise covered the material. Additionally, we present results from user studies with 11 individuals and find that their responses indicate that 8 of the 11 had an increased awareness of computer security or a changed perception; furthermore, all of our intended goals are touched upon in their responses.) <|cite_end|> do not cover a broad range of key cybersecurity topics, such as those identified by knowledge frameworks such as CyBOK, and do not adequately highlight the links between vulnerabilities, attacks and defences.
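As a purely hypothetical sketch (not the deck designed later in this paper), the following shows how a card data model could make such vulnerability-attack-defence links first-class structure, so that a simple scenario can be read directly off the links; all card names here are invented:
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Card:
    name: str
    kind: str         # "vulnerability" | "attack" | "defence"
    cybok_topic: str  # index back into the CyBOK Knowledge Tree
    links: list = field(default_factory=list)  # related card names

DECK = [
    Card("Unsanitised input", "vulnerability", "Software Security",
         links=["SQL injection"]),
    Card("SQL injection", "attack", "Software Security",
         links=["Unsanitised input", "Parameterised queries"]),
    Card("Parameterised queries", "defence", "Software Security",
         links=["SQL injection"]),
]

def scenario(attack_name):
    """Narrate a scenario by walking the links of an attack card."""
    by_name = {c.name: c for c in DECK}
    attack = by_name[attack_name]
    exploited = [n for n in attack.links
                 if by_name[n].kind == "vulnerability"]
    defended = [n for n in attack.links if by_name[n].kind == "defence"]
    return (f"{attack.name} exploits {exploited} "
            f"and is mitigated by {defended}")

print(scenario("SQL injection"))
\end{verbatim}
Existing card games rarely expose such links as explicit structure.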
Specifically, attacks are typically highlighted first, which does not help users understand how attacks present themselves (through opportunistic targeting of vulnerabilities) and how to protect against them. Third, while CTF activities, for example, are beneficial in this respect <|cite_start|> (Reference: Open Source and Commercial Capture The Flag Cyber Security Learning Platforms-A Case Study: The use of gamified learning platforms as a method of introducing cyber security education, training and awareness has risen greatly. With this rise, the availability of platforms to create, host or otherwise provide the challenges that make up the foundation of this education has also increased. In order to identify the best of these platforms, we need a method to compare their feature sets. In this paper, we compare related work on identifying the best platforms for a gamified cyber security learning platform as well as contemporary literature that describes the most needed feature sets for an ideal platform. We then use this to develop a metric for comparing these platforms, before then applying this metric to popular current platforms.) <|cite_end|> <|cite_start|> (Reference: Shell We Play A Game$\{$CTF-as-a-service$\}$ for Security Education: The United States is facing a cyber-security crisis. The supply-side of the cyber-security workforce is not keeping pace with demand: The 2015 (ISC) Global Information Security Workforce Study predicts a shortfall of 1.5 million global information security jobs by 2020 [6]. The lack of qualified cyber-security workforce gives rise to high-profile security incidents, such as the recent Office of Personal Management data breach, where hackers stole 21 million personal files containing sensitive background check information [4]. In addition, attacks against the nation’s critical infrastructure can have devastating effect that go well beyond the financial losses we are witnessing today. The rise in the sophistication of the modern hacker—who waits patiently, quietly leveraging vulnerabilities on one system to compromise another, then slowly exfiltrating sensitive data—demands an equal rise in the skills of security professionals and security-minded developers. Therefore, we must train the next generation of security professionals who will secure the software systems that run companies, organizations, and the nation’s critical infrastructure. Security training requires that developers acquire both the skills necessary to find security vulnerabilities in software, as well as the skills to fix existing flawed software. The knowledge that comes from studying vulnerabilities and vulnerability patterns provides students with the hands-on expertise to complement the theoretical security skills of protection, detection, and response. Live cyber-security exercises are an excellent tool to help teach and reinforce security concepts in students. In the traditional cyber-security exercise concept, also called Capture The Flag (CTF) competitions, the students attempt to discover one or more vulnerabilities in a piece of software (which the organizers created) and then prove that they found a vulnerability by crafting an exploit that takes advantage of the vulnerability, stealing a piece of information from the service (i.e., the flag). At the same time, the students develop patches and defense mechanisms to prevent the exploitation of the vulnerabilities. In this way, the students receive hands-on experience finding vulnerabilities, crafting exploits, and patching services.
Previous research work on this topic has shown that not only do the students learn during the competitions, but they also experience significant learning in preparing for the competition [1]. Unfortunately, live attack-defense cyber-security competitions place a significant time and effort burden on the organizers, because they require a careful design of the infrastructure and a complex network configuration, including complex routing, network filters, and traffic anonymization1. In addition, the creation of vulnerable services requires a skill set that many security educators lack. This limits considerably the adoption of attack-defense live competitions in security curricula.) <|cite_end|>, a key disadvantage pertains to novice users wherein competitions rely on technical expertise and the ability to traverse computer systems using various command-line tools and other bespoke applications <|cite_start|> (Reference: Capture the flag unplugged: an offline cyber competition: In order to meet the cybersecurity workforce demand, it is important to raise cybersecurity interest among the youth. Just like ACM programming competitions, Capture the Flag (CTF) competitions allow students to learn cybersecurity skills in a fun and engaging way. It is an effective platform to increase students' interest in cybersecurity and prepare them for defending against real cyber attackers. A typical CTF competition requires at least some basic technical security knowledge and months of diligent preparation. For this very reason, many computer science students do not feel qualified to participate in CTF competitions, and as a result, do not even try. To overcome this lack of confidence while at the same time raising awareness about the cybersecurity profession in a realistic fashion, we have developed the CTF Unplugged project, as inspired by the CS Unplugged project. The primary goal is to teach students with little or no technical knowledge about the different cybersecurity challenges that a cybersecurity professional must address and the problem-solving skills needed for a cybersecurity career, all without direct use of technology. The effectiveness of CTF unplugged project has been evaluated after exposing 36 high school students participating in the Tennessee Tech University GenCyber Camp to these activities this past summer. Students reported a significant gain in knowledge, confidence and comfort level after participation.) <|cite_end|>, or require (at a minimum) a basic understanding of cybersecurity concepts in order to progress in finding vulnerabilities <|cite_start|> (Reference: Capture the flag as cyber security introduction: Introducing technical concepts to students with little to no technical background can be a challenging task for any teacher to achieve. The concept of gamification has been introduced recently as a method to motivate students by taking a variety of techniques found in popular games, and adding them into educational modules. Extending from this notion, it has been found that capture the flag (CTF) style competitions are a successful way to introduce students to a variety of technical concepts within the standard computer science curriculum. During the 2015 summer, a CTF was run at several GenCyber camps across the country with the primary goal of introducing high school students to various computer security and digital forensics topics without requiring that the students have any background in these topics.
We found that this method of breaking down concepts into singular challenges, and tying these challenges together in a competitive environment was widely successful at not only introducing students to these concepts, but also motivating continued learning after the camps ended. This paper will analyze both the successes of the effort as well as the limitations discovered through using this technique.) <|cite_end|>. <|paper_end|>
[ "<|reference_start|> Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security: Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. Available information include vibrant discussions and oftentimes ready-to-use code snippets. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behaviour on code security is unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow including ready-to-use code snippets provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: Due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. In order to identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4% of the 1.3 million Android applications we analyzed, contained security-related code snippets from Stack Overflow. Out of these 97.9% contain at least one insecure code snippet. <|reference_end|>", "<|reference_start|> Cybersecurity curricular guidelines: The Cybersecurity Curricular Guidelines, a joint effort of the ACM, IEEE Computer Society, AIS SIGSAC, and IFIP WG 11.8, were created to provide developers of cybersecurity curricula with guidelines for material to include. The curricular guidelines have eight knowledge areas, broken down into knowledge units and topics. Underlying cross-cutting concepts provide linkages among the knowledge areas. Disciplinary lenses enable the developer to emphasize the knowledge units appropriate to the goals of the developed curricula. Each knowledge area also includes a list of essential concepts that all curricula should cover to an appropriate depth. The guidelines can be linked to workforce frameworks and certification criteria as well as academic curricula. <|reference_end|>", "<|reference_start|> Exploring CyBOK with Topic Modeling Techniques: <|reference_end|>", "<|reference_start|> Open Source and Commercial Capture The Flag Cyber Security Learning Platforms-A Case Study: The use of gamified learning platforms as a method of introducing cyber security education, training and awareness has risen greatly. With this rise, the availability of platforms to create, host or otherwise provide the challenges that make up the foundation of this education has also increased. In order to identify the best of these platforms, we need a method to compare their feature sets. 
In this paper, we compare related work on identifying the best platforms for a gamified cyber security learning platform as well as contemporary literature that describes the most needed feature sets for an ideal platform. We then use this to develop a metric for comparing these platforms, before then applying this metric to popular current platforms. <|reference_end|>" ]
[ 10, 14, 17, 25 ]
{"<|multi_cite_1_1|>": "ss-1836381", "<|multi_cite_1_2|>": "ss-1836382", "<|cite_2|>": "ss-1230091", "<|multi_cite_3_1|>": "ss-1836383", "<|multi_cite_3_3|>": "ss-951820", "<|cite_4|>": "ss-858853", "<|cite_5|>": "ss-1420345", "<|multi_cite_6_2|>": "ss-1222315", "<|multi_cite_11_2|>": "ss-1836384", "<|multi_cite_11_3|>": "ss-1836385", "<|cite_12|>": "arxiv-136807", "<|cite_13|>": "ss-1218982", "<|multi_cite_14_1|>": "ss-1836386", "<|multi_cite_14_2|>": "ss-948624", "<|cite_15|>": "ss-1836387", "<|cite_16|>": "ss-1222315", "<|cite_21|>": "ss-1115835", "<|cite_22|>": "ss-1836388", "<|multi_cite_23_1|>": "ss-1836389", "<|multi_cite_23_3|>": "ss-1836390", "<|cite_24|>": "ss-1836386", "<|cite_25|>": "ss-1532229", "<|cite_26|>": "ss-1836391", "<|cite_27|>": "ss-1836392", "<|cite_28|>": "ss-1836392", "<|multi_cite_29_1|>": "ss-1836390", "<|multi_cite_29_2|>": "ss-1836389", "<|cite_30|>": "ss-1836393", "<|cite_31|>": "ss-1345855"}
2311.09879-0
<|paper_start|> Title: Cross-Layer Optimization for Statistical QoS Provision in C-RAN with Finite-Length Coding Abstract: Cross-Layer Optimization for Statistical QoS Provision in C-RAN with Finite-Length Coding: The cloud radio access network (C-RAN) has become the foundational structure for various emerging communication paradigms, leveraging the flexible deployment of distributed access points (APs) and centralized task processing. In this paper, we propose a cross-layer optimization framework based on a practical finite-length coding communication system in C-RAN, aiming at maximizing bandwidth efficiency while providing statistical quality of service (QoS) for individual services. Based on the theoretical results from effective capacity and finite-length coding, we formulate a joint optimization problem involving modulation and coding schemes (MCS), retransmission count, initial bandwidth allocation and AP selection, which reflects the coordinated decision of parameters across the physical layer, data link layer and transport layer. To tackle such a mixed-integer nonlinear programming (MINLP) problem, we first decompose it into a transmission parameter decision (TPD) sub-problem and a user association (UA) sub-problem, which can be solved by a binary search-based algorithm and an auction-based algorithm respectively. Simulation results demonstrate that the proposed model can accurately capture the impact of QoS requirements and channel quality on the optimal transmission parameters. Furthermore, compared with fixed transmission parameter settings, the proposed algorithms achieve a bandwidth efficiency gain of up to 27.87% under various traffic and channel scenarios. Introduction \IEEEPARstart{T}{he} advent of the fifth-generation and beyond (5G/B5G) mobile communication technology has paved the way for emerging communication paradigms and innovative services, such as industrial automation, virtual reality (VR), and remote training <|cite_start|> (Reference: A survey of 5G technologies: regulatory, standardization and industrial perspectives: ) <|cite_end|>, <|cite_start|> (Reference: A survey on 5G usage scenarios and traffic models: The fifth-generation mobile initiative, 5G, is a tremendous and collective effort to specify, standardize, design, manufacture, and deploy the next cellular network generation. 5G networks will support demanding services such as enhanced Mobile Broadband, Ultra-Reliable and Low Latency Communications and massive Machine-Type Communications, which will require data rates of tens of Gbps, latencies of few milliseconds and connection densities of millions of devices per square kilometer. This survey presents the most significant use cases expected for 5G including their corresponding scenarios and traffic models. First, the paper analyzes the characteristics and requirements for 5G communications, considering aspects such as traffic volume, network deployments, and main performance targets. Secondly, emphasizing the definition of performance evaluation criteria for 5G technologies, the paper reviews related proposals from principal standards development organizations and industry alliances. Finally, well-defined and significant 5G use cases are provided. As a result, these guidelines will help and ease the performance evaluation of current and future 5G innovations, as well as the dimensioning of 5G future deployments.) <|cite_end|>.
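As background for the statistical-QoS framing in the abstract above, the two theoretical tools it names have standard forms in the literature; the notation below is recalled only for orientation and is not this paper's derivation. The effective capacity of a service process $S(t)$ under QoS exponent $\theta$, and the normal approximation for the maximal rate of a finite-length code of blocklength $n$ and block error probability $\epsilon$ over a channel with capacity $C$ and dispersion $V$, are commonly written as
\begin{align}
\alpha(\theta) &= -\lim_{t \to \infty} \frac{1}{\theta t} \ln \mathbb{E}\!\left[e^{-\theta S(t)}\right], \\
R(n, \epsilon) &\approx C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon),
\end{align}
where $Q^{-1}(\cdot)$ is the inverse Gaussian tail function. A larger $\theta$ encodes a stricter statistical delay requirement, and the $\sqrt{V/n}$ term captures the rate penalty of short blocks relative to Shannon capacity.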
The rapid expansion of these services is in turn providing a fertile ground for further advancements in radio technology, driven by the escalating demand for enhanced connectivity. As the crucial ``last mile'' of data delivery, the radio access network (RAN) assumes critical significance in meeting the stringent requirements of these services <|cite_start|> (Reference: End-to-end congestion control approaches for high throughput and low delay in 4G/5G cellular networks: ) <|cite_end|>, particularly when operating alongside ultra-high-speed wired links, since wireless access frees users from the constraints of physical cabling. Consequently, novel radio technologies and network paradigms have been developed, such as massive multiple-input-multiple-output (massive MIMO) and user-centric networks (UCN) <|cite_start|> (Reference: User-centric Cell-free Massive MIMO Networks: A Survey of Opportunities, Challenges and Solutions: Densification of network base stations is indispensable to achieve the stringent Quality of Service (QoS) requirements of future mobile networks. However, with a dense deployment of transmitters, interference management becomes an arduous task. To solve this issue, exploring radically new network architectures with intelligent coordination and cooperation capabilities is crucial. This survey paper investigates the emerging user-centric cell-free massive Multiple-input multiple-output (MIMO) network architecture that sets a foundation for future mobile networks. Such networks use a dense deployment of distributed units (DUs) to serve users; the crucial difference from the traditional cellular paradigm is that a specific serving cluster of DUs is defined for each user. This framework provides macro diversity, power efficiency, interference management, and robust connectivity. Most importantly, the user-centric approach eliminates cell edges, thus contributing to uniform coverage and performance for users across the network area. We present here a guide to the key challenges facing the deployment of this network scheme and contemplate the solutions being proposed for the main bottlenecks facing cell-free communications. Specifically, we survey the literature targeting the fronthaul, then we scan the details of the channel estimation required, resource allocation, delay, and scalability issues. Furthermore, we highlight some technologies that can provide a management platform for this scheme such as distributed software-defined network (SDN). Our article serves as a check point that delineates the current status and indicates future directions for this area in a comprehensive manner.) <|cite_end|>. Among these developments, the 5G cloud RAN (C-RAN) has garnered considerable attention from both academia and industry, owing to its distinctive deployment structure and significant commercial potential <|cite_start|> (Reference: Recent research in cloud radio access network (C-RAN) for 5G cellular systems - A survey: ) <|cite_end|> <|cite_start|> (Reference: Are Heterogeneous Cloud-Based Radio Access Networks Cost Effective?: Mobile networks of the future are predicted to be much denser than today's networks in order to cater to increasing user demands. In this context, cloud based radio access networks have garnered significant interest as a cost effective solution to the problem of coping with denser networks and providing higher data rates. However, to the best knowledge of the authors, a quantitative analysis of the cost of such networks is yet to be undertaken.
This paper develops a theoretic framework that enables computation of the deployment cost of a network (modeled using various spatial point processes) to answer the question posed by the paper's title. Then, the framework obtained is used along with a complexity model, which enables computing the information processing costs of a network, to compare the deployment cost of a cloud based network against that of a traditional LTE network, and to analyze why they are more economical. Using this framework and an exemplary budget, this paper shows that cloud-based radio access networks require approximately 10 to 15% less capital expenditure per square kilometer than traditional LTE networks. It also demonstrates that the cost savings depend largely on the costs of base stations and the mix of backhaul technologies used to connect base stations with data centers.) <|cite_end|>. On the one hand, it leverages cloud computing and virtualization technologies to decouple the base station (BS) into the baseband unit (BBU) and the remote radio head (RRH). The BBU is responsible for baseband signal processing, while the RRH focuses on signal amplification and modulation. This centralization of computing units, combined with the distributed deployment of radio frequency (RF) units, forms the technological underpinning for various emerging technologies, such as mobile edge computing (MEC) and coordinated multipoint (CoMP) transmission <|cite_start|> (Reference: When the User-Centric Network Meets Mobile Edge Computing: Challenges and Optimization: As an emergent computing paradigm, mobile edge computing (MEC) can provide users with strong computing, storage, and communication services by moving the server to the user side. In recent years, applications such as virtual reality and augmented reality have brought higher requirements on transmission and computing capabilities. However, in the traditional cellular-based MEC, users at the edge of the cell will suffer severe signal attenuation and inter-cell interference, leading to a great reduction in achievable rate and proneness to transmission outage and offloading failure. To overcome this limitation, we combine the user-centric network (UCN) with MEC computing services and propose a novel framework called user-centric MEC (UCMEC). Through the dense deployment of access points, UCMEC can provide users with efficient, reliable, low-cost user-centric wireless transmission and edge computing services. To further exploit the benefits of UCMEC, we jointly optimize the task partition, transmit power control, and computing resource allocation decision to minimize the total energy consumption under delay constraints. Simulation results show that our proposed optimization scheme can bring users lower energy consumption and delay, and higher successful offloading probability than traditional MEC.) <|cite_end|> <|cite_start|> (Reference: {Delay aware resource allocation with radio remote head cooperation in user-centric C-RAN: High spectral efficiency and low latency are required to provide ubiquitous communication for the emerging applications in 5G wireless communication networks. 
In this letter, we propose a novel framework that considers these requirements simultaneously by integrating the notion of effective capacity (EC) into orthogonal frequency division multiple access (OFDMA) cloud-radio access networks (C-RAN) where the users select the distributed radio remote heads (RRHs) based on their specific delay requirements to transmit over different subcarriers cooperatively. Consequently, an optimization problem is defined to maximize the EC under the average peak power constraint and the delay requirements. The problem is combinatorial and non-convex and an algorithm based on the duality and alternating optimization algorithms is proposed, which is efficiently computed with good accuracy. Simulation and analytical results demonstrate that the proposed solution has a near-optimal performance and there is a trade-off between delay and spectral efficiency. Moreover, the cooperation between RRHs can considerably improve the system throughput.) <|cite_end|> <|cite_start|> (Reference: CoMP transmission in downlink NOMA-based heterogeneous cloud radio access networks: In this paper, we investigate the integration between the coordinated multipoint (CoMP) transmission and the non-orthogonal multiple access (NOMA) in downlink heterogeneous cloud radio access networks (H-CRANs). In H-CRAN, low-power high-density small remote radio heads (SRRHs) are underlaid by high-power low-density macro RRH (MRRH). However, co-channel deployment of the different RRHs gives rise to the problem of inter-cell interference that significantly affects system performance especially the cell-edge users. Thus, the users are first categorized into Non-CoMP users and CoMP users based on the relation between the useful signal to the dominant interference signal. The Non-CoMP user is the user equipment (UEs) having high signal-to-interference-plus-noise-ratio ( $\mathtt {SINR}$ ) and hence associates with only one RRH. On the other hand, the CoMP user, cell-edge user, is the UE that experiences less distinctive received power with the best two RRHs. In the proposed CoMP-NOMA framework, each RRH schedules CoMP-UE and non-CoMP-UE over the same transmission channel using NOMA. We first design an analytical framework based on tools from the stochastic geometry to evaluate the performance of the proposed framework (CoMP-NOMA) which is based on H-CRAN in terms of the average achievable data rate for each NOMA UE. We then examine the spectral efficiency of the proposed CoMP-NOMA based H-CRAN. Simulation results are provided to validate the accuracy of the analytical models and to reveal the superiority of the proposed CoMP-NOMA framework compared with conventional CoMP orthogonal multiple access (CoMP-OMA) techniques. By reaping the benefits of both JT-CoMP and NOMA, we prove that the proposed framework can successfully deal with the inter-cell interference by using CoMP and improve the network’s spectral efficiency through NOMA technique. We also show that, with an appropriate power allocation coefficient setting at the Non-CoMP-UEs, a fairness performance can be achieved between the CoMP-UEs and the Non-CoMP-UEs.) <|cite_end|>. 
On the other hand, the sharing of BBU pools among multiple operators through computing unit rentals presents an effective approach to reduce operational costs <|cite_start|> (Reference: Are Heterogeneous Cloud-Based Radio Access Networks Cost Effective?: Mobile networks of the future are predicted to be much denser than today's networks in order to cater to increasing user demands. In this context, cloud based radio access networks have garnered significant interest as a cost effective solution to the problem of coping with denser networks and providing higher data rates. However, to the best knowledge of the authors, a quantitative analysis of the cost of such networks is yet to be undertaken. This paper develops a theoretic framework that enables computation of the deployment cost of a network (modeled using various spatial point processes) to answer the question posed by the paper's title. Then, the framework obtained is used along with a complexity model, which enables computing the information processing costs of a network, to compare the deployment cost of a cloud based network against that of a traditional LTE network, and to analyze why they are more economical. Using this framework and an exemplary budget, this paper shows that cloud-based radio access networks require approximately 10 to 15% less capital expenditure per square kilometer than traditional LTE networks. It also demonstrates that the cost savings depend largely on the costs of base stations and the mix of backhaul technologies used to connect base stations with data centers.) <|cite_end|>. Considering the immense potential and the derivative architectures of C-RAN, extensive research has been conducted in recent years to propose enhanced control schemes at each layer of the system. The throughput is maximized through joint optimization in <|cite_start|> (Reference: Throughput Maximization in Cloud-Radio Access Networks using Cross-Layer Network Coding: Cloud radio access networks (C-RANs) are promising paradigms for the fifth-generation (5G) networks due to their interference management capabilities. In a C-RAN, a central processor (CP) is responsible for coordinating multiple Remote Radio Heads (RRHs) and scheduling users to their radio resource blocks (RRBs). In this paper, we develop a novel cross-layer network coding (CLNC) approach that proposes to optimize RRH’s transmit powers and user’s rates in making the coding decisions. As such, cross-layer throughput of the network is maximized. The joint user scheduling, file encoding, and power adaptation problem is solved by designing a subgraph for each RRB, in which each vertex represents potential user-RRH associations, encoded files, transmission rates, and power levels (PLs) for one RRB. It is then shown that the C-RAN throughput maximization problem is equivalent to a maximum-weight clique problem over the union of all such subgraphs, called herein the CRAN-CLNC graph. Numerical results revealed that the proposed joint and iterative schemes offer improved throughput performances as compared to the existing algorithms in the literature. Compared to our proposed joint scheme, our proposed iterative scheme has a certain degradation, roughly in the range of 9%–14%. This small degradation in the throughput performance of the iterative scheme comes at the achieved low computational complexity as compared to the high complexity of the joint scheme.) 
<|cite_end|> <|cite_start|> (Reference: {Message-passing-based dynamic point selection for coordinated multipoint transmission: This letter develops a dynamic point selection strategy for coordinated multipoint transmission using a message-passing approach. The dynamic determination of the best transmit point for individual users with the objective of the sum-rate maximization can be cast as a bipartite b-matching problem, the computational cost of which, however, becomes quickly intractable with the increasing number of users. Therefore, this letter develops a message-passing algorithm that solves this computationally demanding challenge. Simulation results show that the proposed algorithm outperforms existing greedy-style approaches and provides a very efficient solution for the maximal sum-rate configuration.) <|cite_end|> <|cite_start|> (Reference: Cross-layer cloud offloading with quality of service guarantees in Fog-RANs: Fog radio access networks (F-RANs) have recently been postulated as an innovative solution to improve the fronthaul capacities of cloud base stations (CBSs). This architecture extends the CBS service by involving enhanced remote radio heads (eRRHs), which can pre-store and transmit popular files at the network edge (i.e., close to the end users). This is referred to as caching, and it allows the offloading of CBS resources, e.g., time and frequency. Recent works have been proposed to use rate-aware network coding in order to exploit the previously downloaded popular files at the users’ devices. As such, the CBS offloading is maximized. However, the users’ achieved Quality of Service (QoS), and the standard F-RANs physical-layer resource optimization have not received any attention to date. This paper proposes use of an innovative cross-layer network coding (CLNC) to address the above-mentioned issues. The proposed CLNC scheme is not only aware of different users’ rates but also controls the rates by jointly optimizing coding combinations, users-eRRHs/power zones (PZs) assignments, and transmission power in the PZs. Using a graph theoretical representation, we formulate the joint cross-layer CBS offloading and QoS guarantee problem and show its NP-hardness. Joint and iterative heuristic approaches are then developed to solve this problem using greedy vertex search and coloring techniques. The proposed approaches are finally validated and tested against the existing algorithms in the literature.) <|cite_end|> <|cite_start|> (Reference: A Joint Reinforcement-Learning Enabled Caching and Cross-Layer Network Code in F-RAN With D2D Communications: In this paper, we leverage reinforcement learning (RL) and cross-layer network coding (CLNC) for efficiently pre-fetching requested contents to the local caches and delivering these contents to requesting users in a downlink fog-radio access network (F-RAN) with device-to-device (D2D) communications. In the considered system, fog access points (F-APs) and cache-enabled D2D (CE-D2D) users are equipped with local caches that alleviate traffic burden at the fronthaul and facilitate rapid delivery of the users’ contents. To this end, the CLNC scheme optimizes the coding decisions, transmission rates, and power levels of both F-APs and CE-D2D users, and RL scheme optimizes caching strategy. A joint content placement and delivery problem is formulated as an optimization problem with a goal to maximize system sum-rate. The problem is an NP-hard problem. 
To efficiently solve it, we first develop an innovative decentralized CLNC coalition formation (CLNC-CF) switch algorithm to obtain a stable solution for the content delivery problem, where F-APs and CE-D2D users utilize CLNC resource allocation. By considering statistics of channel and users’ content request into account, we then develop a multi-agent RL algorithm for optimizing the content placement at both F-APs and CE-D2D users. Simulation results show that the proposed joint CLNC-CF-RL framework can effectively improve the sum-rate by up to 30%, 60%, and 150%, respectively, compared to: 1) an optimal uncoded algorithm, 2) a standard rate-aware-NC algorithm, and 3) a benchmark classical NC with network-layer optimization.) <|cite_end|>, considering constraints such as RRH associations, transmit power and bandwidth allocation. In the context of green communications, the minimization of transmit power or maximization of energy efficiency under specific service requirements has been extensively studied in <|cite_start|> (Reference: {Double deep Q-network-based energy-efficient resource allocation in cloud radio access network: Cloud radio access network (CRAN) has been shown as an effective means to boost network performance. Such gain stems from the intelligent management of remote radio heads (RRHs) in terms of on/off operation mode and power consumption. Most conventional resource allocation (RA) methods, however, optimize the network utility without considering the switching overhead of RRHs in adjacent time intervals. When the network environment becomes time-correlated, mathematical optimization is not directly applicable. In this paper, we aim to optimize the energy efficiency (EE) subject to the constraints on per-RRH transmission power and user data rates. To this end, we formulate the EE problem as a Markov decision process (MDP) and subsequently adopt deep reinforcement learning (DRL) technique to reap the cumulative EE rewards. Our starting point is the deep Q network (DQN), which is a combination of deep learning and Q-learning. In each time slot, DQN configures the status of RRHs yielding the largest Q-value (known as state-action value) prior to solving a power minimization problem for active RRHs. To overcome the Q-value overestimation issue of DQN, we propose a Double DQN (DDQN) framework that obtains optimal reward better than DQN by separating the selected action from the target Q-value generator. Simulation results validate that the DDQN-based RA method is more energy-efficient than the DQN-based RA algorithm and a baseline solution.) <|cite_end|> <|cite_start|> (Reference: Cross-Layer Resource Allocation With Elastic Service Scaling in Cloud Radio Access Network: Cloud radio access network (C-RAN) aims to improve spectrum and energy efficiency of wireless networks by migrating conventional distributed base station functionalities into a centralized cloud baseband unit (BBU) pool. We propose and investigate a cross-layer resource allocation model for C-RAN to minimize the overall system power consumption in the BBU pool, fiber links and the remote radio heads (RRHs). We characterize the cross-layer resource allocation problem as a mixed-integer nonlinear programming (MINLP), which jointly considers elastic service scaling, RRH selection, and joint beamforming. The MINLP is however a combinatorial optimization problem and NP-hard. We relax the original MINLP problem into an extended sum-utility maximization (ESUM) problem, and propose two different solution approaches. 
We also propose a low-complexity Shaping-and-Pruning (SP) algorithm to obtain a sparse solution for the active RRH set. Simulation results suggest that the average sparsity of the solution given by our SP algorithm is close to that obtained by a recently proposed greedy selection algorithm, which has higher computational complexity. Furthermore, our proposed cross-layer resource allocation is more energy efficient than the greedy selection and successive selection algorithms.) <|cite_end|> <|cite_start|> (Reference: Cross-layer Optimization for Ultra-reliable and Low-latency Radio Access Networks: In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.) <|cite_end|> <|cite_start|> (Reference: Energy-Efficient Joint Congestion Control and Resource Optimization in Heterogeneous Cloud Radio Access Networks: The heterogeneous cloud radio access network (HCRAN) is a promising paradigm which integrates the advantages of cloud radio access network (C-RAN) and heterogeneous network (HetNet). In this paper, we study the joint congestion control and resource optimization to explore the energy efficiency (EE)-guaranteed tradeoff between throughput utility and delay performance in a downlink slotted H-CRAN. We formulate the considered problem as a stochastic optimization problem, which maximizes the utility of average throughput and maintains the network stability subject to required EE constraint and transmit power consumption constraints by traffic admission control, user association, resource block allocation and power allocation. Leveraging on the Lyapunov optimization technique, the stochastic optimization problem can be transformed and decomposed into three separate subproblems which can be solved concurrently at each slot. The third mixed-integer nonconvex subproblem is efficiently solved utilizing the continuity relaxation of binary variables and the Lagrange dual decomposition method. Theoretical analysis shows that the proposal can quantitatively control the throughput-delay performance tradeoff with required EE performance. 
Simulation results consolidate the theoretical analysis and demonstrate the advantages of the proposal from the prospective of queue stability and power consumption.) <|cite_end|> <|cite_start|> (Reference: Energy-efficient power allocation for distributed antenna systems with proportional fairness: In this paper, we propose an energy-efficient power allocation scheme for the downlink multiuser distributed antenna systems. The objective is to maximize the energy efficiency (EE) under the constraints on per-antenna transmit power and proportional data rates among users. Since EE function is typically defined in fractional form, it is computationally complex to optimize the EE directly. We first convert the nonlinear fractional problem into an equivalent but better tractable problem, based on which we derive an iterative algorithm to obtain the global optimum of the considered problem. On top of being optimal, the solution is also lightweight since only a single-variable nonlinear equation needs to be solved. Furthermore, we can flexibly switch to another mode of operation which delivers relatively higher spectral efficiency (SE) at the expense of losing EE. Numerical simulations and complexity analysis validate that the proposed scheme achieves much higher SE and EE performance with stricter satisfaction of the proportional rate constraints and lower complexity, compared to the state-of-the-art scheme in literature.) <|cite_end|> <|cite_start|> (Reference: Energy-Efficient Resource Allocation in OFDM Systems with Distributed Antennas: In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which includes a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission designs.) <|cite_end|> <|cite_start|> (Reference: Computation Offloading for IoT in C-RAN: Optimization and Deep Learning: We consider computation offloading for Internet-of-things (IoT) applications in multiple-input-multiple-output (MIMO) cloud-radio-access-network (C-RAN). Due to the limited battery life and computational capability in the IoT devices (IoTDs), the computational tasks of the IoTDs are offloaded to a MIMO C-RAN, where a MIMO radio resource head (RRH) is connected to a baseband unit (BBU) through a capacity-limited fronthaul link, facilitated by the spatial filtering and uniform scalar quantization. 
We formulate a computation offloading optimization problem to minimize the total transmit power of the IoTDs while satisfying the latency requirement of the computational tasks, and find that the problem is non-convex. To obtain a feasible solution, firstly the spatial filtering matrix is locally optimized at the MIMO RRH. Subsequently, we leverage the alternating optimization framework for joint optimization on the residual variables at the BBU, where the baseband combiner is obtained in a closed-form, the resource allocation sub-problem is solved through successive inner convexification, and the number of quantization bits is obtained by a line-search method. As a low-complexity approach, we deploy a supervised deep learning method, which is trained with the solutions to our optimization algorithm. Numerical results validate the effectiveness of the proposed algorithm and the deep learning method.) <|cite_end|>. Authors in <|cite_start|> (Reference: CoMP transmission in downlink NOMA-based heterogeneous cloud radio access networks: In this paper, we investigate the integration between the coordinated multipoint (CoMP) transmission and the non-orthogonal multiple access (NOMA) in downlink heterogeneous cloud radio access networks (H-CRANs). In H-CRAN, low-power high-density small remote radio heads (SRRHs) are underlaid by high-power low-density macro RRH (MRRH). However, co-channel deployment of the different RRHs gives rise to the problem of inter-cell interference that significantly affects system performance especially the cell-edge users. Thus, the users are first categorized into Non-CoMP users and CoMP users based on the relation between the useful signal to the dominant interference signal. The Non-CoMP user is the user equipment (UEs) having high signal-to-interference-plus-noise-ratio ( $\mathtt {SINR}$ ) and hence associates with only one RRH. On the other hand, the CoMP user, cell-edge user, is the UE that experiences less distinctive received power with the best two RRHs. In the proposed CoMP-NOMA framework, each RRH schedules CoMP-UE and non-CoMP-UE over the same transmission channel using NOMA. We first design an analytical framework based on tools from the stochastic geometry to evaluate the performance of the proposed framework (CoMP-NOMA) which is based on H-CRAN in terms of the average achievable data rate for each NOMA UE. We then examine the spectral efficiency of the proposed CoMP-NOMA based H-CRAN. Simulation results are provided to validate the accuracy of the analytical models and to reveal the superiority of the proposed CoMP-NOMA framework compared with conventional CoMP orthogonal multiple access (CoMP-OMA) techniques. By reaping the benefits of both JT-CoMP and NOMA, we prove that the proposed framework can successfully deal with the inter-cell interference by using CoMP and improve the network’s spectral efficiency through NOMA technique. We also show that, with an appropriate power allocation coefficient setting at the Non-CoMP-UEs, a fairness performance can be achieved between the CoMP-UEs and the Non-CoMP-UEs.) <|cite_end|> <|cite_start|> (Reference: Systematic resource allocation in cloud RAN with caching as a service under two timescales: Recently, cloud radio access network (C-RAN) with caching as a service (CaaS) was proposed to merge the functionalities of communication, computing, and caching (CC&C) together. In this paper, we dissect the interactions of CC&C in C-RAN with CaaS from two dimensions: physical resource dimension and time dimension. 
In the physical resource dimension, we identify how to segment the baseband unit (BBU) pool resources (i.e., computation and storage) into different types of virtual machines (VMs). In the time dimension, we address how the long-term resource segmentation in the BBU pool impacts on the short-term transmit beamforming at the remote radio heads. We formulate the problem as a stochastic mixed-integer nonlinear programming (SMINLP) to minimize the system cost, including the server cost, VM cost and wireless transmission cost. After a series of approximation, including sample average approximation, successive convex approximation, and semidefinite relaxation, the SMINLP is approximated as a global consensus problem. The alternating direction method of multipliers (ADMM) is utilized to obtain the solution in a parallel fashion. Simulation results verify the convergence of our proposed algorithm, and also confirm that the proposed scheme is more cost-saving than that without considering the integration of CC&C.) <|cite_end|> <|cite_start|> (Reference: Economically optimal MS association for multimedia content delivery in cache-enabled heterogeneous cloud radio access networks: In cache-enabled heterogeneous cloud radio access networks (HC-RANs), mobile station (MS) association for multimedia content delivery should consider both the content caching location and the wireless channel quality. This paper studies economically optimal MS association to tradeoff the cache-hit ratio and the ratio of MSs with satisfied quality of service (QoS). When the associated enhanced remote radio unit (eRRU) stores the requesting content, the content can be fetched directly from the local cache. Otherwise, fronthaul has to be used to fetch the content. The use of fronthaul resource and cache is treated as costs, and payments of QoS-satisfied MSs are treated as incomes. Thus, the economic MS association is formulated as an optimization problem to maximize the system utility, i.e., total profit of the network operator, which is defined as the difference between incomes and costs. A belief propagation-based method is employed to solve the problem on a developed factor graph. Simulation results show that the proposed economically optimal MS association achieves much higher profit than the existing schemes and works well in the network with various loads. Moreover, the profit of the proposed scheme can be improved with inter-cell interference coordination. For the case with extremely skewed content popularity, the proposed scheme can avoid MS overloading at eRRUs storing most popular multimedia contents. Furthermore, it can support more MSs with satisfied QoS, which leads to a higher profit.) <|cite_end|> integrate various system benefits into utility functions to achieve joint optimization of system performance. Nevertheless, many works primarily focus on pursuing optimal performance based on given system resources without considering the specific QoS requirements of individual services <|cite_start|> (Reference: CoMP transmission in downlink NOMA-based heterogeneous cloud radio access networks: In this paper, we investigate the integration between the coordinated multipoint (CoMP) transmission and the non-orthogonal multiple access (NOMA) in downlink heterogeneous cloud radio access networks (H-CRANs). In H-CRAN, low-power high-density small remote radio heads (SRRHs) are underlaid by high-power low-density macro RRH (MRRH).
However, co-channel deployment of the different RRHs gives rise to the problem of inter-cell interference that significantly affects system performance especially the cell-edge users. Thus, the users are first categorized into Non-CoMP users and CoMP users based on the relation between the useful signal to the dominant interference signal. The Non-CoMP user is the user equipment (UEs) having high signal-to-interference-plus-noise-ratio ( $\mathtt {SINR}$ ) and hence associates with only one RRH. On the other hand, the CoMP user, cell-edge user, is the UE that experiences less distinctive received power with the best two RRHs. In the proposed CoMP-NOMA framework, each RRH schedules CoMP-UE and non-CoMP-UE over the same transmission channel using NOMA. We first design an analytical framework based on tools from the stochastic geometry to evaluate the performance of the proposed framework (CoMP-NOMA) which is based on H-CRAN in terms of the average achievable data rate for each NOMA UE. We then examine the spectral efficiency of the proposed CoMP-NOMA based H-CRAN. Simulation results are provided to validate the accuracy of the analytical models and to reveal the superiority of the proposed CoMP-NOMA framework compared with conventional CoMP orthogonal multiple access (CoMP-OMA) techniques. By reaping the benefits of both JT-CoMP and NOMA, we prove that the proposed framework can successfully deal with the inter-cell interference by using CoMP and improve the network’s spectral efficiency through NOMA technique. We also show that, with an appropriate power allocation coefficient setting at the Non-CoMP-UEs, a fairness performance can be achieved between the CoMP-UEs and the Non-CoMP-UEs.) <|cite_end|> <|cite_start|> (Reference: Cross-Layer Resource Allocation With Elastic Service Scaling in Cloud Radio Access Network: Cloud radio access network (C-RAN) aims to improve spectrum and energy efficiency of wireless networks by migrating conventional distributed base station functionalities into a centralized cloud baseband unit (BBU) pool. We propose and investigate a cross-layer resource allocation model for C-RAN to minimize the overall system power consumption in the BBU pool, fiber links and the remote radio heads (RRHs). We characterize the cross-layer resource allocation problem as a mixed-integer nonlinear programming (MINLP), which jointly considers elastic service scaling, RRH selection, and joint beamforming. The MINLP is however a combinatorial optimization problem and NP-hard. We relax the original MINLP problem into an extended sum-utility maximization (ESUM) problem, and propose two different solution approaches. We also propose a low-complexity Shaping-and-Pruning (SP) algorithm to obtain a sparse solution for the active RRH set. Simulation results suggest that the average sparsity of the solution given by our SP algorithm is close to that obtained by a recently proposed greedy selection algorithm, which has higher computational complexity. Furthermore, our proposed cross-layer resource allocation is more energy efficient than the greedy selection and successive selection algorithms.) <|cite_end|> <|cite_start|> (Reference: Throughput Maximization in Cloud-Radio Access Networks using Cross-Layer Network Coding: Cloud radio access networks (C-RANs) are promising paradigms for the fifth-generation (5G) networks due to their interference management capabilities. 
In a C-RAN, a central processor (CP) is responsible for coordinating multiple Remote Radio Heads (RRHs) and scheduling users to their radio resource blocks (RRBs). In this paper, we develop a novel cross-layer network coding (CLNC) approach that proposes to optimize RRH’s transmit powers and user’s rates in making the coding decisions. As such, cross-layer throughput of the network is maximized. The joint user scheduling, file encoding, and power adaptation problem is solved by designing a subgraph for each RRB, in which each vertex represents potential user-RRH associations, encoded files, transmission rates, and power levels (PLs) for one RRB. It is then shown that the C-RAN throughput maximization problem is equivalent to a maximum-weight clique problem over the union of all such subgraphs, called herein the CRAN-CLNC graph. Numerical results revealed that the proposed joint and iterative schemes offer improved throughput performances as compared to the existing algorithms in the literature. Compared to our proposed joint scheme, our proposed iterative scheme has a certain degradation, roughly in the range of 9%–14%. This small degradation in the throughput performance of the iterative scheme comes at the achieved low computational complexity as compared to the high complexity of the joint scheme.) <|cite_end|> <|cite_start|> (Reference: {Message-passing-based dynamic point selection for coordinated multipoint transmission: This letter develops a dynamic point selection strategy for coordinated multipoint transmission using a message-passing approach. The dynamic determination of the best transmit point for individual users with the objective of the sum-rate maximization can be cast as a bipartite b-matching problem, the computational cost of which, however, becomes quickly intractable with the increasing number of users. Therefore, this letter develops a message-passing algorithm that solves this computationally demanding challenge. Simulation results show that the proposed algorithm outperforms existing greedy-style approaches and provides a very efficient solution for the maximal sum-rate configuration.) <|cite_end|> <|cite_start|> (Reference: Cross-layer Optimization for Ultra-reliable and Low-latency Radio Access Networks: In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. 
Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.) <|cite_end|> <|cite_start|> (Reference: {Joint communication and computing resource allocation in 5G cloud radio access networks: Cloud-radio access network (C-RAN) is regarded as a promising solution to manage heterogeneity and scalability of future wireless networks. The centralized cooperative resource allocation and interference cancellation methods in C-RAN significantly reduce the interference levels to provide high data rates. However, the centralized solution is not scalable due to the dense deployment of small cells with fractional frequency reuse, causing severe inter-tier and inter-cell interference turning the resource allocation and user association into a more challenging problem. In this paper, we investigate joint communication and computing resource allocation along with user association, and baseband unit (BBU) and remote radio head (RRH) mapping in C-RANs. We initially establish a queueing model in C-RAN, followed by formulation of two optimization problems for communication [e.g., resource blocks (RBs) and power] and computing [e.g., virtual machines (VMs)] resources allocation with the aim to minimize mean response time. User association along with the RB allocation, interference, and queueing stability constraints are considered in the communication resource optimization problem. The computing resource optimization problem considers BBU-RRH mapping and VM allocation for small cells, constrained to BBU server capacity and queueing stability. To solve the communication and computing resource optimization problem, we propose a joint resource allocation solution that considers a double-sided auction based distributed resource allocation (DS-ADRA) method, where small cell base stations and users jointly participate using the concept of auction theory. The proposed method is evaluated via simulations by considering the effect of bandwidth utilization percentage, signal-to-interference ratio threshold value and the number of users. The results show that the proposed method can be successfully implemented for 5G C-RANs.) <|cite_end|> <|cite_start|> (Reference: Computation Offloading for IoT in C-RAN: Optimization and Deep Learning: We consider computation offloading for Internet-of-things (IoT) applications in multiple-input-multiple-output (MIMO) cloud-radio-access-network (C-RAN). Due to the limited battery life and computational capability in the IoT devices (IoTDs), the computational tasks of the IoTDs are offloaded to a MIMO C-RAN, where a MIMO radio resource head (RRH) is connected to a baseband unit (BBU) through a capacity-limited fronthaul link, facilitated by the spatial filtering and uniform scalar quantization. We formulate a computation offloading optimization problem to minimize the total transmit power of the IoTDs while satisfying the latency requirement of the computational tasks, and find that the problem is non-convex. To obtain a feasible solution, firstly the spatial filtering matrix is locally optimized at the MIMO RRH. Subsequently, we leverage the alternating optimization framework for joint optimization on the residual variables at the BBU, where the baseband combiner is obtained in a closed-form, the resource allocation sub-problem is solved through successive inner convexification, and the number of quantization bits is obtained by a line-search method. 
As a low-complexity approach, we deploy a supervised deep learning method, which is trained with the solutions to our optimization algorithm. Numerical results validate the effectiveness of the proposed algorithm and the deep learning method.) <|cite_end|>. Moreover, other optimization efforts that take into account traffic characteristics have not fully characterized the utilization of underlying resources <|cite_start|> (Reference: Economically optimal MS association for multimedia content delivery in cache-enabled heterogeneous cloud radio access networks: In cache-enabled heterogeneous cloud radio access networks (HC-RANs), mobile station (MS) association for multimedia content delivery should consider both the content caching location and the wireless channel quality. This paper studies economically optimal MS association to tradeoff the cache-hit ratio and the ratio of MSs with satisfied quality of service (QoS). When the associated enhanced remote radio unit (eRRU) stores the requesting content, the content can be fetched directly from the local cache. Otherwise, fronthaul has to be used to fetch the content. The use of fronthaul resource and cache is treated as costs, and payments of QoS-satisfied MSs are treated as incomes. Thus, the economic MS association is formulated as an optimization problem to maximize the system utility, i.e., total profit of the network operator, which is defined as the difference between incomes and costs. A belief propagation-based method is employed to solve the problem on a developed factor graph. Simulation results show that the proposed economically optimal MS association achieves much higher profit than the existing schemes and works well in the network with various loads. Moreover, the profit of the proposed scheme can be improved with inter-cell interference coordination. For the case with extremely skewed content popularity, the proposed scheme can avoid MS overloading at eRRUs storing most popular multimedia contents. Furthermore, it can support more MSs with satisfied QoS, which leads to a higher profit.) <|cite_end|> <|cite_start|> (Reference: Systematic resource allocation in cloud RAN with caching as a service under two timescales: Recently, cloud radio access network (C-RAN) with caching as a service (CaaS) was proposed to merge the functionalities of communication, computing, and caching (CC&C) together. In this paper, we dissect the interactions of CC&C in C-RAN with CaaS from two dimensions: physical resource dimension and time dimension. In the physical resource dimension, we identify how to segment the baseband unit (BBU) pool resources (i.e., computation and storage) into different types of virtual machines (VMs). In the time dimension, we address how the long-term resource segmentation in the BBU pool impacts on the short-term transmit beamforming at the remote radio heads. We formulate the problem as a stochastic mixed-integer nonlinear programming (SMINLP) to minimize the system cost, including the server cost, VM cost and wireless transmission cost. After a series of approximation, including sample average approximation, successive convex approximation, and semidefinite relaxation, the SMINLP is approximated as a global consensus problem. The alternating direction method of multipliers (ADMM) is utilized to obtain the solution in a parallel fashion. 
Simulation results verify the convergence of our proposed algorithm, and also confirm that the proposed scheme is more cost-saving than that without considering the integration of CC&C.) <|cite_end|>. In addition, the majority of studies make decisions on user scheduling and power, bandwidth, and computational resource allocation based on ideal infinite-length channel codes, where Shannon capacity is considered as the actual throughput of users <|cite_start|> (Reference: {Double deep Q-network-based energy-efficient resource allocation in cloud radio access network: Cloud radio access network (CRAN) has been shown as an effective means to boost network performance. Such gain stems from the intelligent management of remote radio heads (RRHs) in terms of on/off operation mode and power consumption. Most conventional resource allocation (RA) methods, however, optimize the network utility without considering the switching overhead of RRHs in adjacent time intervals. When the network environment becomes time-correlated, mathematical optimization is not directly applicable. In this paper, we aim to optimize the energy efficiency (EE) subject to the constraints on per-RRH transmission power and user data rates. To this end, we formulate the EE problem as a Markov decision process (MDP) and subsequently adopt deep reinforcement learning (DRL) technique to reap the cumulative EE rewards. Our starting point is the deep Q network (DQN), which is a combination of deep learning and Q-learning. In each time slot, DQN configures the status of RRHs yielding the largest Q-value (known as state-action value) prior to solving a power minimization problem for active RRHs. To overcome the Q-value overestimation issue of DQN, we propose a Double DQN (DDQN) framework that obtains optimal reward better than DQN by separating the selected action from the target Q-value generator. Simulation results validate that the DDQN-based RA method is more energy-efficient than the DQN-based RA algorithm and a baseline solution.) <|cite_end|> <|cite_start|> (Reference: CoMP transmission in downlink NOMA-based heterogeneous cloud radio access networks: In this paper, we investigate the integration between the coordinated multipoint (CoMP) transmission and the non-orthogonal multiple access (NOMA) in downlink heterogeneous cloud radio access networks (H-CRANs). In H-CRAN, low-power high-density small remote radio heads (SRRHs) are underlaid by high-power low-density macro RRH (MRRH). However, co-channel deployment of the different RRHs gives rise to the problem of inter-cell interference that significantly affects system performance especially the cell-edge users. Thus, the users are first categorized into Non-CoMP users and CoMP users based on the relation between the useful signal to the dominant interference signal. The Non-CoMP user is the user equipment (UEs) having high signal-to-interference-plus-noise-ratio ( $\mathtt {SINR}$ ) and hence associates with only one RRH. On the other hand, the CoMP user, cell-edge user, is the UE that experiences less distinctive received power with the best two RRHs. In the proposed CoMP-NOMA framework, each RRH schedules CoMP-UE and non-CoMP-UE over the same transmission channel using NOMA. We first design an analytical framework based on tools from the stochastic geometry to evaluate the performance of the proposed framework (CoMP-NOMA) which is based on H-CRAN in terms of the average achievable data rate for each NOMA UE. 
We then examine the spectral efficiency of the proposed CoMP-NOMA based H-CRAN. Simulation results are provided to validate the accuracy of the analytical models and to reveal the superiority of the proposed CoMP-NOMA framework compared with conventional CoMP orthogonal multiple access (CoMP-OMA) techniques. By reaping the benefits of both JT-CoMP and NOMA, we prove that the proposed framework can successfully deal with the inter-cell interference by using CoMP and improve the network’s spectral efficiency through NOMA technique. We also show that, with an appropriate power allocation coefficient setting at the Non-CoMP-UEs, a fairness performance can be achieved between the CoMP-UEs and the Non-CoMP-UEs.) <|cite_end|> <|cite_start|> (Reference: Cross-Layer Resource Allocation With Elastic Service Scaling in Cloud Radio Access Network: Cloud radio access network (C-RAN) aims to improve spectrum and energy efficiency of wireless networks by migrating conventional distributed base station functionalities into a centralized cloud baseband unit (BBU) pool. We propose and investigate a cross-layer resource allocation model for C-RAN to minimize the overall system power consumption in the BBU pool, fiber links and the remote radio heads (RRHs). We characterize the cross-layer resource allocation problem as a mixed-integer nonlinear programming (MINLP), which jointly considers elastic service scaling, RRH selection, and joint beamforming. The MINLP is however a combinatorial optimization problem and NP-hard. We relax the original MINLP problem into an extended sum-utility maximization (ESUM) problem, and propose two different solution approaches. We also propose a low-complexity Shaping-and-Pruning (SP) algorithm to obtain a sparse solution for the active RRH set. Simulation results suggest that the average sparsity of the solution given by our SP algorithm is close to that obtained by a recently proposed greedy selection algorithm, which has higher computational complexity. Furthermore, our proposed cross-layer resource allocation is more energy efficient than the greedy selection and successive selection algorithms.) <|cite_end|> <|cite_start|> (Reference: Throughput Maximization in Cloud-Radio Access Networks using Cross-Layer Network Coding: Cloud radio access networks (C-RANs) are promising paradigms for the fifth-generation (5G) networks due to their interference management capabilities. In a C-RAN, a central processor (CP) is responsible for coordinating multiple Remote Radio Heads (RRHs) and scheduling users to their radio resource blocks (RRBs). In this paper, we develop a novel cross-layer network coding (CLNC) approach that proposes to optimize RRH’s transmit powers and user’s rates in making the coding decisions. As such, cross-layer throughput of the network is maximized. The joint user scheduling, file encoding, and power adaptation problem is solved by designing a subgraph for each RRB, in which each vertex represents potential user-RRH associations, encoded files, transmission rates, and power levels (PLs) for one RRB. It is then shown that the C-RAN throughput maximization problem is equivalent to a maximum-weight clique problem over the union of all such subgraphs, called herein the CRAN-CLNC graph. Numerical results revealed that the proposed joint and iterative schemes offer improved throughput performances as compared to the existing algorithms in the literature. 
Compared to our proposed joint scheme, our proposed iterative scheme has a certain degradation, roughly in the range of 9%–14%. This small degradation in the throughput performance of the iterative scheme comes at the achieved low computational complexity as compared to the high complexity of the joint scheme.) <|cite_end|> <|cite_start|> (Reference: {Joint communication and computing resource allocation in 5G cloud radio access networks: Cloud-radio access network (C-RAN) is regarded as a promising solution to manage heterogeneity and scalability of future wireless networks. The centralized cooperative resource allocation and interference cancellation methods in C-RAN significantly reduce the interference levels to provide high data rates. However, the centralized solution is not scalable due to the dense deployment of small cells with fractional frequency reuse, causing severe inter-tier and inter-cell interference turning the resource allocation and user association into a more challenging problem. In this paper, we investigate joint communication and computing resource allocation along with user association, and baseband unit (BBU) and remote radio head (RRH) mapping in C-RANs. We initially establish a queueing model in C-RAN, followed by formulation of two optimization problems for communication [e.g., resource blocks (RBs) and power] and computing [e.g., virtual machines (VMs)] resources allocation with the aim to minimize mean response time. User association along with the RB allocation, interference, and queueing stability constraints are considered in the communication resource optimization problem. The computing resource optimization problem considers BBU-RRH mapping and VM allocation for small cells, constrained to BBU server capacity and queueing stability. To solve the communication and computing resource optimization problem, we propose a joint resource allocation solution that considers a double-sided auction based distributed resource allocation (DS-ADRA) method, where small cell base stations and users jointly participate using the concept of auction theory. The proposed method is evaluated via simulations by considering the effect of bandwidth utilization percentage, signal-to-interference ratio threshold value and the number of users. The results show that the proposed method can be successfully implemented for 5G C-RANs.) <|cite_end|> <|cite_start|> (Reference: {Message-passing-based dynamic point selection for coordinated multipoint transmission: This letter develops a dynamic point selection strategy for coordinated multipoint transmission using a message-passing approach. The dynamic determination of the best transmit point for individual users with the objective of the sum-rate maximization can be cast as a bipartite b-matching problem, the computational cost of which, however, becomes quickly intractable with the increasing number of users. Therefore, this letter develops a message-passing algorithm that solves this computationally demanding challenge. Simulation results show that the proposed algorithm outperforms existing greedy-style approaches and provides a very efficient solution for the maximal sum-rate configuration.) <|cite_end|> <|cite_start|> (Reference: Cross-layer Optimization for Ultra-reliable and Low-latency Radio Access Networks: In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. 
With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.) <|cite_end|> <|cite_start|> (Reference: Energy-Efficient Joint Congestion Control and Resource Optimization in Heterogeneous Cloud Radio Access Networks: The heterogeneous cloud radio access network (HCRAN) is a promising paradigm which integrates the advantages of cloud radio access network (C-RAN) and heterogeneous network (HetNet). In this paper, we study the joint congestion control and resource optimization to explore the energy efficiency (EE)-guaranteed tradeoff between throughput utility and delay performance in a downlink slotted H-CRAN. We formulate the considered problem as a stochastic optimization problem, which maximizes the utility of average throughput and maintains the network stability subject to required EE constraint and transmit power consumption constraints by traffic admission control, user association, resource block allocation and power allocation. Leveraging on the Lyapunov optimization technique, the stochastic optimization problem can be transformed and decomposed into three separate subproblems which can be solved concurrently at each slot. The third mixed-integer nonconvex subproblem is efficiently solved utilizing the continuity relaxation of binary variables and the Lagrange dual decomposition method. Theoretical analysis shows that the proposal can quantitatively control the throughput-delay performance tradeoff with required EE performance. Simulation results consolidate the theoretical analysis and demonstrate the advantages of the proposal from the prospective of queue stability and power consumption.) <|cite_end|> <|cite_start|> (Reference: Systematic resource allocation in cloud RAN with caching as a service under two timescales: Recently, cloud radio access network (C-RAN) with caching as a service (CaaS) was proposed to merge the functionalities of communication, computing, and caching (CC&C) together. In this paper, we dissect the interactions of CC&C in C-RAN with CaaS from two dimensions: physical resource dimension and time dimension. In the physical resource dimension, we identify how to segment the baseband unit (BBU) pool resources (i.e., computation and storage) into different types of virtual machines (VMs). In the time dimension, we address how the long-term resource segmentation in the BBU pool impacts on the short-term transmit beamforming at the remote radio heads. 
We formulate the problem as a stochastic mixed-integer nonlinear programming (SMINLP) to minimize the system cost, including the server cost, VM cost and wireless transmission cost. After a series of approximation, including sample average approximation, successive convex approximation, and semidefinite relaxation, the SMINLP is approximated as a global consensus problem. The alternating direction method of multipliers (ADMM) is utilized to obtain the solution in a parallel fashion. Simulation results verify the convergence of our proposed algorithm, and also confirm that the proposed scheme is more cost-saving than that without considering the integration of CC&C.) <|cite_end|> <|cite_start|> (Reference: Cross-layer cloud offloading with quality of service guarantees in Fog-RANs: Fog radio access networks (F-RANs) have recently been postulated as an innovative solution to improve the fronthaul capacities of cloud base stations (CBSs). This architecture extends the CBS service by involving enhanced remote radio heads (eRRHs), which can pre-store and transmit popular files at the network edge (i.e., close to the end users). This is referred to as caching, and it allows the offloading of CBS resources, e.g., time and frequency. Recent works have been proposed to use rate-aware network coding in order to exploit the previously downloaded popular files at the users’ devices. As such, the CBS offloading is maximized. However, the users’ achieved Quality of Service (QoS), and the standard F-RANs physical-layer resource optimization have not received any attention to date. This paper proposes use of an innovative cross-layer network coding (CLNC) to address the above-mentioned issues. The proposed CLNC scheme is not only aware of different users’ rates but also controls the rates by jointly optimizing coding combinations, users-eRRHs/power zones (PZs) assignments, and transmission power in the PZs. Using a graph theoretical representation, we formulate the joint cross-layer CBS offloading and QoS guarantee problem and show its NP-hardness. Joint and iterative heuristic approaches are then developed to solve this problem using greedy vertex search and coloring techniques. The proposed approaches are finally validated and tested against the existing algorithms in the literature.) <|cite_end|> <|cite_start|> (Reference: A Joint Reinforcement-Learning Enabled Caching and Cross-Layer Network Code in F-RAN With D2D Communications: In this paper, we leverage reinforcement learning (RL) and cross-layer network coding (CLNC) for efficiently pre-fetching requested contents to the local caches and delivering these contents to requesting users in a downlink fog-radio access network (F-RAN) with device-to-device (D2D) communications. In the considered system, fog access points (F-APs) and cache-enabled D2D (CE-D2D) users are equipped with local caches that alleviate traffic burden at the fronthaul and facilitate rapid delivery of the users’ contents. To this end, the CLNC scheme optimizes the coding decisions, transmission rates, and power levels of both F-APs and CE-D2D users, and RL scheme optimizes caching strategy. A joint content placement and delivery problem is formulated as an optimization problem with a goal to maximize system sum-rate. The problem is an NP-hard problem. 
To efficiently solve it, we first develop an innovative decentralized CLNC coalition formation (CLNC-CF) switch algorithm to obtain a stable solution for the content delivery problem, where F-APs and CE-D2D users utilize CLNC resource allocation. By considering statistics of channel and users’ content request into account, we then develop a multi-agent RL algorithm for optimizing the content placement at both F-APs and CE-D2D users. Simulation results show that the proposed joint CLNC-CF-RL framework can effectively improve the sum-rate by up to 30%, 60%, and 150%, respectively, compared to: 1) an optimal uncoded algorithm, 2) a standard rate-aware-NC algorithm, and 3) a benchmark classical NC with network-layer optimization.) <|cite_end|> <|cite_start|> (Reference: Computation Offloading for IoT in C-RAN: Optimization and Deep Learning: We consider computation offloading for Internet-of-things (IoT) applications in multiple-input-multiple-output (MIMO) cloud-radio-access-network (C-RAN). Due to the limited battery life and computational capability in the IoT devices (IoTDs), the computational tasks of the IoTDs are offloaded to a MIMO C-RAN, where a MIMO radio resource head (RRH) is connected to a baseband unit (BBU) through a capacity-limited fronthaul link, facilitated by the spatial filtering and uniform scalar quantization. We formulate a computation offloading optimization problem to minimize the total transmit power of the IoTDs while satisfying the latency requirement of the computational tasks, and find that the problem is non-convex. To obtain a feasible solution, firstly the spatial filtering matrix is locally optimized at the MIMO RRH. Subsequently, we leverage the alternating optimization framework for joint optimization on the residual variables at the BBU, where the baseband combiner is obtained in a closed-form, the resource allocation sub-problem is solved through successive inner convexification, and the number of quantization bits is obtained by a line-search method. As a low-complexity approach, we deploy a supervised deep learning method, which is trained with the solutions to our optimization algorithm. Numerical results validate the effectiveness of the proposed algorithm and the deep learning method.) <|cite_end|>. Only a few references consider practical finite-length coding communication schemes and the associated bit error rate <|cite_start|> (Reference: Economically optimal MS association for multimedia content delivery in cache-enabled heterogeneous cloud radio access networks: In cache-enabled heterogeneous cloud radio access networks (HC-RANs), mobile station (MS) association for multimedia content delivery should consider both the content caching location and the wireless channel quality. This paper studies economically optimal MS association to tradeoff the cache-hit ratio and the ratio of MSs with satisfied quality of service (QoS). When the associated enhanced remote radio unit (eRRU) stores the requesting content, the content can be fetched directly from the local cache. Otherwise, fronthaul has to be used to fetch the content. The use of fronthaul resource and cache is treated as costs, and payments of QoS-satisfied MSs are treated as incomes. Thus, the economic MS association is formulated as an optimization problem to maximize the system utility, i.e., total profit of the network operator, which is defined as the difference between incomes and costs. A belief propagation-based method is employed to solve the problem on a developed factor graph. 
Simulation results show that the proposed economically optimal MS association achieves much higher profit than the existing schemes and works well in the network with various loads. Moreover, the profit of the proposed scheme can be improved with inter-cell interference coordination. For the case with extremely skewed content popularity, the proposed scheme can avoid MS overloading at eRRUs storing most popular multimedia contents. Furthermore, it can support more MSs with satisfied QoS, which leads to a higher profit.) <|cite_end|> <|cite_start|> (Reference: Energy-efficient power allocation for distributed antenna systems with proportional fairness: In this paper, we propose an energy-efficient power allocation scheme for the downlink multiuser distributed antenna systems. The objective is to maximize the energy efficiency (EE) under the constraints on per-antenna transmit power and proportional data rates among users. Since EE function is typically defined in fractional form, it is computationally complex to optimize the EE directly. We first convert the nonlinear fractional problem into an equivalent but better tractable problem, based on which we derive an iterative algorithm to obtain the global optimum of the considered problem. On top of being optimal, the solution is also lightweight since only a single-variable nonlinear equation needs to be solved. Furthermore, we can flexibly switch to another mode of operation which delivers relatively higher spectral efficiency (SE) at the expense of losing EE. Numerical simulations and complexity analysis validate that the proposed scheme achieves much higher SE and EE performance with stricter satisfaction of the proportional rate constraints and lower complexity, compared to the state-of-the-art scheme in literature.) <|cite_end|> <|cite_start|> (Reference: Energy-Efficient Resource Allocation in OFDM Systems with Distributed Antennas: In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which includes a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission designs.) <|cite_end|>, but in-depth analyses are lacking <|cite_start|> (Reference: Energy-efficient power allocation for distributed antenna systems with proportional fairness: In this paper, we propose an energy-efficient power allocation scheme for the downlink multiuser distributed antenna systems. 
The objective is to maximize the energy efficiency (EE) under the constraints on per-antenna transmit power and proportional data rates among users. Since EE function is typically defined in fractional form, it is computationally complex to optimize the EE directly. We first convert the nonlinear fractional problem into an equivalent but better tractable problem, based on which we derive an iterative algorithm to obtain the global optimum of the considered problem. On top of being optimal, the solution is also lightweight since only a single-variable nonlinear equation needs to be solved. Furthermore, we can flexibly switch to another mode of operation which delivers relatively higher spectral efficiency (SE) at the expense of losing EE. Numerical simulations and complexity analysis validate that the proposed scheme achieves much higher SE and EE performance with stricter satisfaction of the proportional rate constraints and lower complexity, compared to the state-of-the-art scheme in literature.) <|cite_end|> <|cite_start|> (Reference: Energy-Efficient Resource Allocation in OFDM Systems with Distributed Antennas: In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which includes a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission designs.) <|cite_end|>. Notably, performance analyses based on the Shannon capacity not only overestimate the actual performance of practical communication systems but also say nothing about how transmission parameters affect service performance under finite-length channel coding; this is the obstacle we aim to overcome in this work. Moreover, the studies above optimize transmission in only one or two layers of the system, which is inherently limited compared with cross-layer optimization that jointly considers upper-layer applications and lower-layer resources. In other words, applications with diverse quality-of-service (QoS) requirements drive optimization across multiple layers.
Ultra-reliable low-latency communication (uRLLC) services, for example, require extreme reliability ($\sim$99.999\%) and ultra-low latency ($\sim$1 ms), while enhanced mobile broadband (eMBB) applications expect high capacity with some tolerance for loss <|cite_start|> (Reference: A survey on 5G usage scenarios and traffic models: The fifth-generation mobile initiative, 5G, is a tremendous and collective effort to specify, standardize, design, manufacture, and deploy the next cellular network generation. 5G networks will support demanding services such as enhanced Mobile Broadband, Ultra-Reliable and Low Latency Communications and massive Machine-Type Communications, which will require data rates of tens of Gbps, latencies of few milliseconds and connection densities of millions of devices per square kilometer. This survey presents the most significant use cases expected for 5G including their corresponding scenarios and traffic models. First, the paper analyzes the characteristics and requirements for 5G communications, considering aspects such as traffic volume, network deployments, and main performance targets. Secondly, emphasizing the definition of performance evaluation criteria for 5G technologies, the paper reviews related proposals from principal standards development organizations and industry alliances. Finally, well-defined and significant 5G use cases are provided. As a result, these guidelines will help and ease the performance evaluation of current and future 5G innovations, as well as the dimensioning of 5G future deployments.) <|cite_end|>. If the same reliability guarantee and latency tolerance are applied uniformly to all service types, some services will fail to meet their requirements while others will waste scarce wireless resources. On the other hand, the distributed RF units provide a structural foundation for implementing coordinated and collaborative transmission/reception strategies between RRHs, thereby improving spectrum efficiency and energy efficiency <|cite_start|> (Reference: Recent research in cloud radio access network (C-RAN) for 5G cellular systems - A survey: ) <|cite_end|> <|cite_start|> (Reference: {Double deep Q-network-based energy-efficient resource allocation in cloud radio access network: Cloud radio access network (CRAN) has been shown as an effective means to boost network performance. Such gain stems from the intelligent management of remote radio heads (RRHs) in terms of on/off operation mode and power consumption. Most conventional resource allocation (RA) methods, however, optimize the network utility without considering the switching overhead of RRHs in adjacent time intervals. When the network environment becomes time-correlated, mathematical optimization is not directly applicable. In this paper, we aim to optimize the energy efficiency (EE) subject to the constraints on per-RRH transmission power and user data rates. To this end, we formulate the EE problem as a Markov decision process (MDP) and subsequently adopt deep reinforcement learning (DRL) technique to reap the cumulative EE rewards. Our starting point is the deep Q network (DQN), which is a combination of deep learning and Q-learning. In each time slot, DQN configures the status of RRHs yielding the largest Q-value (known as state-action value) prior to solving a power minimization problem for active RRHs.
To overcome the Q-value overestimation issue of DQN, we propose a Double DQN (DDQN) framework that obtains optimal reward better than DQN by separating the selected action from the target Q-value generator. Simulation results validate that the DDQN-based RA method is more energy-efficient than the DQN-based RA algorithm and a baseline solution.) <|cite_end|>.
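Returning to the finite-blocklength point raised above: as a minimal numerical sketch, the normal approximation of Polyanskiy et al. estimates the maximal achievable rate over an AWGN channel at blocklength $n$ and block error probability $\epsilon$ as $R^*(n,\epsilon) \approx C - \sqrt{V/n}\, Q^{-1}(\epsilon) + \frac{\log_2 n}{2n}$. The helper name and the SNR, blocklength, and error-probability values below are illustrative assumptions, not parameters from the cited works.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def awgn_normal_approx(snr, n, eps):
    """Normal approximation to the maximal rate (bits/channel use)
    at blocklength n and block error probability eps."""
    C = np.log2(1.0 + snr)                        # Shannon capacity
    V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * np.log2(np.e) ** 2
    return C - np.sqrt(V / n) * norm.isf(eps) + np.log2(n) / (2 * n)

snr = 10 ** (10 / 10)             # assumed 10 dB operating point
for n in (128, 1024, 100000):     # short (uRLLC-like) to long blocks
    r = awgn_normal_approx(snr, n, eps=1e-5)
    print(f"n={n:>6}: rate ~ {r:.3f} b/cu vs capacity "
          f"{np.log2(1 + snr):.3f} b/cu")
\end{verbatim}

At uRLLC-scale blocklengths the achievable rate falls visibly short of capacity, which is precisely why Shannon-capacity-based analyses overestimate the performance of short-packet services. <|paper_end|>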
[ "<|reference_start|> A survey of 5G technologies: regulatory, standardization and industrial perspectives: <|reference_end|>", "<|reference_start|> Systematic resource allocation in cloud RAN with caching as a service under two timescales: Recently, cloud radio access network (C-RAN) with caching as a service (CaaS) was proposed to merge the functionalities of communication, computing, and caching (CC&C) together. In this paper, we dissect the interactions of CC&C in C-RAN with CaaS from two dimensions: physical resource dimension and time dimension. In the physical resource dimension, we identify how to segment the baseband unit (BBU) pool resources (i.e., computation and storage) into different types of virtual machines (VMs). In the time dimension, we address how the long-term resource segmentation in the BBU pool impacts on the short-term transmit beamforming at the remote radio heads. We formulate the problem as a stochastic mixed-integer nonlinear programming (SMINLP) to minimize the system cost, including the server cost, VM cost and wireless transmission cost. After a series of approximation, including sample average approximation, successive convex approximation, and semidefinite relaxation, the SMINLP is approximated as a global consensus problem. The alternating direction method of multipliers (ADMM) is utilized to obtain the solution in a parallel fashion. Simulation results verify the convergence of our proposed algorithm, and also confirm that the proposed scheme is more cost-saving than that without considering the integration of CC&C. <|reference_end|>", "<|reference_start|> CoMP transmission in downlink NOMA-based heterogeneous cloud radio access networks: In this paper, we investigate the integration between the coordinated multipoint (CoMP) transmission and the non-orthogonal multiple access (NOMA) in downlink heterogeneous cloud radio access networks (H-CRANs). In H-CRAN, low-power high-density small remote radio heads (SRRHs) are underlaid by high-power low-density macro RRH (MRRH). However, co-channel deployment of the different RRHs gives rise to the problem of inter-cell interference that significantly affects system performance especially the cell-edge users. Thus, the users are first categorized into Non-CoMP users and CoMP users based on the relation between the useful signal to the dominant interference signal. The Non-CoMP user is the user equipment (UEs) having high signal-to-interference-plus-noise-ratio ( $\\mathtt {SINR}$ ) and hence associates with only one RRH. On the other hand, the CoMP user, cell-edge user, is the UE that experiences less distinctive received power with the best two RRHs. In the proposed CoMP-NOMA framework, each RRH schedules CoMP-UE and non-CoMP-UE over the same transmission channel using NOMA. We first design an analytical framework based on tools from the stochastic geometry to evaluate the performance of the proposed framework (CoMP-NOMA) which is based on H-CRAN in terms of the average achievable data rate for each NOMA UE. We then examine the spectral efficiency of the proposed CoMP-NOMA based H-CRAN. Simulation results are provided to validate the accuracy of the analytical models and to reveal the superiority of the proposed CoMP-NOMA framework compared with conventional CoMP orthogonal multiple access (CoMP-OMA) techniques. 
By reaping the benefits of both JT-CoMP and NOMA, we prove that the proposed framework can successfully deal with the inter-cell interference by using CoMP and improve the network’s spectral efficiency through NOMA technique. We also show that, with an appropriate power allocation coefficient setting at the Non-CoMP-UEs, a fairness performance can be achieved between the CoMP-UEs and the Non-CoMP-UEs. <|reference_end|>", "<|reference_start|> Systematic resource allocation in cloud RAN with caching as a service under two timescales: Recently, cloud radio access network (C-RAN) with caching as a service (CaaS) was proposed to merge the functionalities of communication, computing, and caching (CC&C) together. In this paper, we dissect the interactions of CC&C in C-RAN with CaaS from two dimensions: physical resource dimension and time dimension. In the physical resource dimension, we identify how to segment the baseband unit (BBU) pool resources (i.e., computation and storage) into different types of virtual machines (VMs). In the time dimension, we address how the long-term resource segmentation in the BBU pool impacts on the short-term transmit beamforming at the remote radio heads. We formulate the problem as a stochastic mixed-integer nonlinear programming (SMINLP) to minimize the system cost, including the server cost, VM cost and wireless transmission cost. After a series of approximation, including sample average approximation, successive convex approximation, and semidefinite relaxation, the SMINLP is approximated as a global consensus problem. The alternating direction method of multipliers (ADMM) is utilized to obtain the solution in a parallel fashion. Simulation results verify the convergence of our proposed algorithm, and also confirm that the proposed scheme is more cost-saving than that without considering the integration of CC&C. <|reference_end|>" ]
[ 0, 32, 34, 41 ]
{"<|cite_1|>": "ss-707874", "<|cite_2|>": "ss-1275217", "<|cite_3|>": "ss-1332644", "<|cite_4|>": "arxiv-337808", "<|multi_cite_5_1|>": "ss-956304", "<|multi_cite_5_2|>": "arxiv-74445", "<|multi_cite_6_1|>": "ss-2441234", "<|multi_cite_6_2|>": "ss-2511335", "<|multi_cite_6_3|>": "ss-1728877", "<|cite_7|>": "arxiv-74445", "<|multi_cite_8_1|>": "ss-1234742", "<|multi_cite_8_2|>": "ss-2511336", "<|multi_cite_8_3|>": "ss-1269780", "<|multi_cite_8_4|>": "ss-1246738", "<|multi_cite_9_1|>": "ss-2511337", "<|multi_cite_9_2|>": "ss-1310877", "<|multi_cite_9_3|>": "arxiv-120213", "<|multi_cite_9_4|>": "arxiv-92375", "<|multi_cite_9_5|>": "ss-1805329", "<|multi_cite_9_6|>": "ss-1142186", "<|multi_cite_9_7|>": "arxiv-225219", "<|multi_cite_10_1|>": "ss-1728877", "<|multi_cite_10_2|>": "ss-1107124", "<|multi_cite_10_3|>": "ss-2477931", "<|multi_cite_11_1|>": "ss-1728877", "<|multi_cite_11_2|>": "ss-1310877", "<|multi_cite_11_3|>": "ss-1234742", "<|multi_cite_11_4|>": "ss-2511336", "<|multi_cite_11_5|>": "arxiv-120213", "<|multi_cite_11_6|>": "ss-2511338", "<|multi_cite_11_7|>": "arxiv-225219", "<|multi_cite_12_1|>": "ss-2477931", "<|multi_cite_12_2|>": "ss-1107124", "<|multi_cite_13_1|>": "ss-2511337", "<|multi_cite_13_2|>": "ss-1728877", "<|multi_cite_13_3|>": "ss-1310877", "<|multi_cite_13_4|>": "ss-1234742", "<|multi_cite_13_5|>": "ss-2511338", "<|multi_cite_13_6|>": "ss-2511336", "<|multi_cite_13_7|>": "arxiv-120213", "<|multi_cite_13_8|>": "arxiv-92375", "<|multi_cite_13_9|>": "ss-1107124", "<|multi_cite_13_10|>": "ss-1269780", "<|multi_cite_13_11|>": "ss-1246738", "<|multi_cite_13_12|>": "arxiv-225219", "<|multi_cite_14_1|>": "ss-2477931", "<|multi_cite_14_2|>": "ss-1805329", "<|multi_cite_14_3|>": "ss-1142186", "<|multi_cite_15_1|>": "ss-1805329", "<|multi_cite_15_2|>": "ss-1142186", "<|cite_16|>": "ss-1275217", "<|multi_cite_17_1|>": "ss-956304", "<|multi_cite_17_2|>": "ss-2511337", "<|multi_cite_17_3|>": "ss-1728877", "<|multi_cite_17_4|>": "ss-1310877", "<|cite_19|>": "ss-2483732", "<|multi_cite_20_1|>": "ss-1095228", "<|multi_cite_20_2|>": "ss-1630641", "<|multi_cite_20_3|>": "arxiv-504597", "<|multi_cite_20_4|>": "ss-2441241", "<|multi_cite_20_5|>": "ss-2441240", "<|multi_cite_21_1|>": "ss-1505987", "<|multi_cite_21_2|>": "ss-1355932"}
2111.10452
<|paper_start|> Title: MURAL: An Unsupervised Random Forest-Based Embedding for Electronic Health Record Data Abstract: MURAL: An Unsupervised Random Forest-Based Embedding for Electronic Health Record Data: A major challenge in embedding or visualizing clinical patient data is the heterogeneity of variable types including continuous lab values, categorical diagnostic codes, as well as missing or incomplete data. In particular, in EHR data, some variables are {\em missing not at random (MNAR)} but deliberately not collected and thus are a source of information. For example, lab tests may be deemed necessary for some patients on the basis of suspected diagnosis, but not for others. Here we present the MURAL forest -- an unsupervised random forest for representing data with disparate variable types (e.g., categorical, continuous, MNAR). MURAL forests consist of a set of decision trees where node-splitting variables are chosen at random, such that the marginal entropy of all other variables is minimized by the split. This allows us to also split on MNAR variables and discrete variables in a way that is consistent with the continuous variables. The end goal is to learn the MURAL embedding of patients using average tree distances between those patients. These distances can be fed to a nonlinear dimensionality reduction method like PHATE to derive visualizable embeddings. While such methods are ubiquitous in continuous-valued datasets (like single cell RNA-sequencing) they have not been used extensively on mixed-variable data. We showcase the use of our method on one artificial and two clinical datasets. We show that using our approach, we can visualize and classify data more accurately than competing approaches. Finally, we show that MURAL can also be used to compare cohorts of patients via the recently proposed tree-sliced Wasserstein distances. Introduction Unsupervised nonlinear embedding methods have enabled exploratory manifold learning of large, high-dimensional datasets in many fields, ranging from epidemiology to biology to physics. However, a major limitation of using unsupervised embeddings on healthcare data is the large amount of missingness in the data as well as the mixed modality of the variables collected. In a typical EHR or patient dataset, the proportion of missing data ranges from 20\% to 80\%, varying across broad categories of possible fields such as demographics, laboratory values, and treatment information <|cite_start|> (Reference: Evaluation of data completeness in the electronic health record for the purpose of patient recruitment into clinical trials: a retrospective analysis of element presence: ) <|cite_end|> <|cite_start|> (Reference: Strategies for handling missing clinical data for automated surgical site infection detection from the electronic health record: ) <|cite_end|> <|cite_start|> (Reference: Informative missingness in electronic health record systems: the curse of knowing: ) <|cite_end|>. Further, there is a mix of real-valued, categorical and binary data, which can be difficult to normalize or scale. This makes it difficult to compute distances and affinities between datapoints---the first step in nonlinear dimensionality reduction methods such as tSNE, UMAP <|cite_start|> (Reference: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction: UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction.
UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.) <|cite_end|>, diffusion maps <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|> or PHATE <|cite_start|> (Reference: Visualizing structure and transitions in high-dimensional biological data: ) <|cite_end|>. Similar distance/affinity computations are also required for spectral clustering <|cite_start|> (Reference: {On spectral clustering: analysis and an algorithm: Despite many empirical successes of spectral clustering methods— algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First. there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.) <|cite_end|>, which operates on a graph Laplacian computed from the affinity matrix. Thus data with missing values cannot be used, and if the values are MNAR they cannot be imputed. To tackle these issues, we propose to use an intermediary representation called the MURAL-forest, an unsupervised random forest in which tree distances between datapoints form an accurate measure of dissimilarity and can be used for data distance/affinity computation, as needed in <|cite_start|> (Reference: Visualizing structure and transitions in high-dimensional biological data: ) <|cite_end|> <|cite_start|> (Reference: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction: UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.) <|cite_end|> <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|>. MURAL creates a set of trees by splitting on any variable type (categorical, continuous, with or without missingness) using a marginal entropy criterion that is computed on {\em other} variables. Further, MURAL ensures that heterogeneity within categorical or MNAR variables is immediately broken down, using low-dimensional entropy to create 4-way splits at such levels. We test MURAL on data with known ground truth, showing that the resulting tree distances yield accurate embeddings.
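To make this construction concrete, the following is a minimal sketch of an entropy-guided split of the kind described above. The helper names, the histogram entropy estimator, the candidate-threshold sampling, and the handling of missing rows are illustrative assumptions rather than the exact MURAL procedure (which, for instance, routes missing values to dedicated branches of a 4-way split).

\begin{verbatim}
import numpy as np

def marginal_entropy(col, bins=10):
    """Histogram estimate of the Shannon entropy of one variable."""
    col = col[~np.isnan(col)]
    if col.size < 2:
        return 0.0
    counts, _ = np.histogram(col, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def other_variable_entropy(X, rows, split_var):
    """Mean marginal entropy of all variables *except* the split
    variable, over the selected rows (the quantity MURAL reduces)."""
    others = [j for j in range(X.shape[1]) if j != split_var]
    return float(np.mean([marginal_entropy(X[rows, j]) for j in others]))

def best_threshold(X, j, n_candidates=8, seed=0):
    """Pick a threshold on variable j minimizing the size-weighted
    marginal entropy of the other variables in the two children."""
    rng = np.random.default_rng(seed)
    col = X[:, j]
    observed = col[~np.isnan(col)]
    best_score, best_thr = np.inf, None
    for thr in rng.choice(observed, size=min(n_candidates, observed.size),
                          replace=False):
        left = ~np.isnan(col) & (col <= thr)   # missing rows would get
        right = ~np.isnan(col) & (col > thr)   # their own branch in MURAL
        if left.sum() == 0 or right.sum() == 0:
            continue
        score = (left.mean() * other_variable_entropy(X, left, j)
                 + right.mean() * other_variable_entropy(X, right, j))
        if score < best_score:
            best_score, best_thr = score, thr
    return best_thr, best_score
\end{verbatim}

Because a split is scored by the entropy of the remaining variables, the same criterion can rank splits on categorical or MNAR variables consistently with splits on continuous ones, which is what lets all variable types coexist within a single tree.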
While random forests are normally supervised and trained for prediction, there have been some efforts to learn random forests in an unsupervised manner. <|cite_start|> (Reference: Decision Forests: A Unified Framework For Classification, Regression, Density Estimation, Manifold Learning and Semi-supervised Learning: This review presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision, and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning, and active learning under the same decision forest framework. This gives us the opportunity to write and optimize the core implementation only once, with application to many diverse tasks. The proposed model may be used both in a discriminative or generative way and may be applied to discrete or continuous, labeled or unlabeled data. The main contributions of this review are: (1) Proposing a unified, probabilistic and efficient model for a variety of learning tasks; (2) Demonstrating margin-maximizing properties of classification forests; (3) Discussing probabilistic regression forests in comparison with other nonlinear regression algorithms; (4) Introducing density forests for estimating probability density functions; (5) Proposing an efficient algorithm for sampling from a density forest; (6) Introducing manifold forests for nonlinear dimensionality reduction; (7) Proposing new algorithms for transductive learning and active learning. Finally, we discuss how alternatives such as random ferns and extremely randomized trees stem from our more general forest model. This document is directed at both students who wish to learn the basics of decision forests, as well as researchers interested in the new contributions. It presents both fundamental and novel concepts in a structured way, with many illustrative examples and real-world applications. Thorough comparisons with state-of-the-art algorithms such as support vector machines, boosting and Gaussian processes are presented and relative advantages and disadvantages discussed. The many synthetic examples and existing commercial applications demonstrate the validity of the proposed model and its flexibility.) <|cite_end|> describes a method called {\em manifold forests} which effectively use a splitting criterion based on intra-versus-inter split affinity or density. However, these and other methods often presuppose the ability to compute distances or affinities between high dimensional datapoints. By contrast, we use our MURAL unsupervised random forests in order {\em to be able to compute} an accurate distance between datapoints with missing and mixed-mode variables. We show the accuracy of our method by comparing the MURAL derived distances to known ground truth and recovering embeddings in a 5-dimensional Swiss roll. We then apply our method to a complete case subset of an intensive care unit dataset and of an international patient registry dataset of patients presenting with symptoms of upper gastrointestinal bleeding. We induce missingness in the complete case subsets in specific ranges of laboratory values and compare imputed values using mean imputation and multiple imputation with chained equations to the original ground truth. We then construct MURAL-embeddings on the full datasets with missingness. 
We show that MURAL-embeddings consistently display more structure and create separations that are more clinically meaningful than commonly used imputation methods. Finally, we show an application of our method in comparing entire cohorts of patients by computing a tree-based Wasserstein distance on the MURAL-forest, which can be used to quantify similarities or distances between patient cohorts. Related Work \subsection{Manifold Learning, Dimensionality Reduction, Clustering} Though there are many nonlinear dimensionality reduction and embedding methods, we focus our results on methods that can learn the {\em data manifold} or intrinsic low dimensional shape and structure of the data. We believe that this is useful in biomedical settings where many measurements of the patient reflect non-orthogonal aspects of the same underlying entity, essentially indicating the data in fact lies in a lower dimensional space. High dimensional data can often be modeled as a sampling $Z = \{z_i\}_{i=1}^N \subset \mathcal{M}^d$ of a $d$ dimensional manifold $\mathcal{M}^d$ that is mapped to $n$ dimensional observations $X = \{x_1, \ldots, x_N\} \subset \mathbbm{R}^n$ via a nonlinear function $x_i = f(z_i)$. Intuitively, although measurement strategies, modeled here via $f$, create high dimensional observations, the intrinsic dimensionality, or degrees of freedom within the data, is relatively low. This manifold assumption is at the core of the vast field of manifold learning <|cite_start|> (Reference: Manifold learning-based methods for analyzing single-cell RNA-sequencing data: ) <|cite_end|> <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|> <|cite_start|> (Reference: Dimensionality Reduction: A Comparative Review: In recent years, a variety of nonlinear dimensionality reduction techniques have been proposed that aim to address the limitations of traditional techniques such as PCA and classical scaling. The paper presents a review and systematic comparison of these techniques. The performances of the nonlinear techniques are investigated on artificial and natural tasks. The results of the experiments reveal that nonlinear techniques perform well on selected artificial tasks, but that this strong performance does not necessarily extend to real-world tasks. The paper explains these results by identifying weaknesses of current nonlinear techniques, and suggests how the performance of nonlinear dimensionality reduction techniques may be improved.) <|cite_end|> <|cite_start|> (Reference: {Introduction to manifold learning: A popular research area today in statistics and machine learning is that of manifold learning, which is related to the algorithmic techniques of dimensionality reduction. Manifold learning can be divided into linear and nonlinear methods. Linear methods, which have long been part of the statistician's toolbox for analyzing multivariate data, include principal component analysis (PCA) and multidimensional scaling (MDS). Recently, there has been a flurry of research activity on nonlinear manifold learning, which includes Isomap, local linear embedding, Laplacian eigenmaps, Hessian eigenmaps, and diffusion maps. Some of these techniques are nonlinear generalizations of the linear methods. The algorithmic process of most of these techniques consists of three steps: a nearest‐neighbor search, a definition of distances or affinities between points (a key ingredient for the success of these methods), and an eigenproblem for embedding high‐dimensional points into a lower dimensional space. 
This article gives us a brief survey of these new methods and indicates their strengths and weaknesses. WIREs Comput Stat 2012 doi: 10.1002/wics.1222) <|cite_end|>, which leverages the intrinsic geometry of data, as modeled by a manifold, for exploring and understanding patterns, trends, and structure that display significant nonlinearity. In <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|>, diffusion maps were proposed as a robust way to capture intrinsic manifold geometry in data by eigendecomposing a powered diffusion operator. Using $t$-step random walks that aggregate local affinity, <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|> were able to reveal nonlinear relations in data and allow their embedding in low dimensional coordinates. These local affinities are commonly constructed using a Gaussian kernel: \begin{equation} \label{GKernel} \mathbf{K} (x_i, x_j) = \exp\left( {-\frac{\| x_i- x_j\|^2}{\varepsilon} }\right) \,, \quad i,j=1,...,N \end{equation} where $\mK$ forms an $N \times N$ Gram matrix whose $(i,j)$ entry is denoted by $\mK(x_i, x_j)$. A diffusion operator is defined as the row-stochastic matrix $\mP = \mD^{-1} \mK$, where $\mD$ is a diagonal matrix with $\mD (x_i, x_i) = \sum_j \mK (x_i,x_j)$. The matrix $\mP$, or diffusion operator, defines single-step transition probabilities for a time-homogeneous diffusion process, or a Markovian random walk, over the data. Furthermore, as shown in <|cite_start|> (Reference: Diffusion maps: ) <|cite_end|>, powers of this matrix $\mP^t$, for $t > 0$, can be used to simulate multi-step random walks over the data, helping to understand the multiscale organization of $X$, which can be interpreted geometrically when the manifold assumption is satisfied. $\mP$ has been used in many downstream unsupervised learning tasks; eigendecomposing $\mP$ yields the popular diffusion map dimensionality reduction method, whose output can in turn be used as input to clustering. $\mP$ is also used by the PHATE <|cite_start|> (Reference: Visualizing structure and transitions in high-dimensional biological data: ) <|cite_end|> method for visualization. PHATE transforms the diffusion operator with a pointwise logarithm $\log(\mP)$, derives distances between points $x_i, x_j$ as $\|\log(\mP_i)-\log(\mP_j)\|_2$, and then embeds the resulting distances, known as {\em potential distances}, with metric MDS. Other methods for visualization, such as <|cite_start|> (Reference: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction: UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.) <|cite_end|>, use $\mK$ rather than $\mP$ to focus on near neighbors rather than learning the entire data manifold. The diffusion operator $\mP$ is related to the graph Laplacian, which, depending on the normalization used, can be written as $\mL=\mI-\mK$ or $\mL=\mI-\mP$. Thus the graph Laplacian has the same eigenvectors as $\mP$, with the corresponding eigenvalues appearing in the opposite order. A minimal numerical sketch of this kernel-to-potential-distance pipeline is given below.
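The helper name, the fixed bandwidth, the diffusion time, and the use of scikit-learn's MDS in this sketch are illustrative assumptions, not the reference PHATE implementation.

\begin{verbatim}
import numpy as np
from sklearn.manifold import MDS

def potential_embedding(X, eps=1.0, t=8, n_components=2):
    """Gaussian kernel -> diffusion operator -> log-potential
    distances -> metric MDS, on an (N, d) data matrix X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / eps)                    # Gram matrix K, Eq. (GKernel)
    P = K / K.sum(axis=1, keepdims=True)     # P = D^{-1} K, row-stochastic
    Pt = np.linalg.matrix_power(P, t)        # t-step random walk
    logP = np.log(Pt + 1e-12)                # pointwise log ("potential")
    # potential distances ||log P_i - log P_j||_2 between all row pairs
    D = np.linalg.norm(logP[:, None, :] - logP[None, :, :], axis=-1)
    return MDS(n_components=n_components,
               dissimilarity="precomputed").fit_transform(D)
\end{verbatim}

The dense pairwise computations keep the sketch short; practical implementations use sparse nearest-neighbor kernels and adaptive bandwidths.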
Spectral clustering <|cite_start|> (Reference: {On spectral clustering: analysis and an algorithm: Despite many empirical successes of spectral clustering methods— algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First. there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.) <|cite_end|> is often described in terms of the graph Laplacian, i.e., as $k$-means applied to the leading eigenvectors of the graph Laplacian rather than to the raw data. \subsection{Decision Trees} \label{sec:decisiontree} A {\em tree} $T$ is a connected directed acyclic graph $T=(V,E)$ with vertices (or nodes) $V=\{t_1, t_2, \ldots, t_n\}$ and $n-1$ edges $E$ such that every node has at most one incoming edge. A rooted tree has a {\em root} node $t_1$ with no incoming edges, while $t_i$, $i > 1$, all have exactly one incoming edge. A node $t_j \in children(t_i)$ if and only if $[t_i, t_j] \in E$, i.e., there is a directed edge from $t_i$ to $t_j$. A $descendant(t_i)$ is any node $t_k$ that is connected to $t_i$ by a directed path $t_i, \ldots, t_k$ with a directed edge between each consecutive pair of nodes. Decision trees contain nodes that split on a variable to create partitions of the data such that datapoints on one side of the partition are more similar to each other in terms of the decision variable. Recursive splits create finer-granularity branches in which datapoints are similar with respect to all of the variables that have been split on along the path to the node. A specific strength of decision trees is their ability to naturally split on multiple types of data---binary, ordinal, and missing. \subsection{Supervised Random Forests} In classification tasks, single decision trees can learn irregular patterns and overfit to data. As a way of addressing this, random forests average over sets of decision trees <|cite_start|> (Reference: {Classification And Regression Trees: Bellows TS and Fisher TW (eds.) (1999) Handbook of Biological Control: Principles and Applications of Biological Control. San Diego: Academic Press. Clausen CP (ed.) (1978) Agricultural Research Service: Handbook No. 480: Introduced Parasites and Predators of Arthropod Pests and Weeds: A World Review. Washington, DC: USDA: Agricultural Research Service. DeBach P and Rosen D (1991) Biological Control by Natural Enemies. Cambridge, UK: Cambridge University Press. Follett PA and Duan JJ (eds.) (2000) Nontarget Effects of Biological Control. Boston, MA: Kluwer Academic. Jervis M and Kidd N (eds.) (1996) Insect Natural Enemies: Practical Approaches to their Study and Evaluation. London: Chapman and Hall. Julien MH and Griffiths MW (eds.) (1998) Biological Control of Weeds, a World Catalogue of Agents and their Target Weeds, 4th edn. Wallingford: CABI Publishing. Van Driesche J and Van Driesche RG (2000) Nature Out of Place: Biological Invasions in a Global Age. Washington, DC: Island Press. Van Driesche RG, Hoddle M, and Center T (2008) Control of Pests and Weeds by Natural Enemies, an Introduction to Biological Control. London: Blackwell.)
<|cite_end|> and are created by randomizing variable splits. The algorithm selects a random subset of features at each potential split, and chooses a threshold so as to optimize a local criterion such as the {\em Gini impurity index} or {\em information gain}. The Gini impurity index is an information theoretic measure that is based on Tsallis entropy <|cite_start|> (Reference: Possible generalization of Boltzmann-Gibbs statistics: ) <|cite_end|>. For $C$ classes (given labels) with fractions $P = \{ p_1, p_2, \ldots, p_C\}$ of observations in each class, the Gini impurity index is given by $I_G(P) = 1 - \sum_i p_i^2$. Information gain, another information-theoretic criterion, measures the difference in Shannon entropy between the parent node and its child nodes. Shannon entropy of a probability distribution $P$ is given by $H(P) = -\sum_i p_i \log(p_i)$. Information gain is defined as \begin{equation} \label{eqn:infogain} I_G(P) = H(P) - \sum_a \frac{|a|}{k} H(P^a). \end{equation} Here $P$ is the class distribution of the parent node, and $P^a$ is the class distribution of the $a$-th child node, which receives $|a|$ datapoints. The total number of datapoints split by the parent node is $k$. Note that these criteria are defined with respect to a classification label that is given in a supervised setting. The original random forest classifier used labeled data to randomly train an ensemble of decision trees, with a majority vote aggregating the classifications. Decision trees are constructed by recursively partitioning the space occupied by the data as observations travel from the tree's root to its leaves, each nonterminal node containing a weak learner that chooses a splitting variable and threshold. These weak learners minimize an impurity function to ensure that each child node receives a ``purer'' cohort than its parent. Purity is determined by the proportion of labels; if all examples belong to the same class, the subset is considered pure. \subsection{Unsupervised Random Forests} Variants of decision trees have been used to cluster data in the absence of labels: random projection trees <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: {Random projection trees and low dimensional manifolds: We present a simple variant of the k-d tree which automatically adapts to intrinsic low dimensional structure in data without having to explicitly learn this structure.) <|cite_end|>, density forests <|cite_start|> (Reference: Decision Forests: A Unified Framework For Classification, Regression, Density Estimation, Manifold Learning and Semi-supervised Learning: This review presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision, and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning, and active learning under the same decision forest framework. This gives us the opportunity to write and optimize the core implementation only once, with application to many diverse tasks. The proposed model may be used both in a discriminative or generative way and may be applied to discrete or continuous, labeled or unlabeled data.
The main contributions of this review are: (1) Proposing a unified, probabilistic and efficient model for a variety of learning tasks; (2) Demonstrating margin-maximizing properties of classification forests; (3) Discussing probabilistic regression forests in comparison with other nonlinear regression algorithms; (4) Introducing density forests for estimating probability density functions; (5) Proposing an efficient algorithm for sampling from a density forest; (6) Introducing manifold forests for nonlinear dimensionality reduction; (7) Proposing new algorithms for transductive learning and active learning. Finally, we discuss how alternatives such as random ferns and extremely randomized trees stem from our more general forest model. This document is directed at both students who wish to learn the basics of decision forests, as well as researchers interested in the new contributions. It presents both fundamental and novel concepts in a structured way, with many illustrative examples and real-world applications. Thorough comparisons with state-of-the-art algorithms such as support vector machines, boosting and Gaussian processes are presented and relative advantages and disadvantages discussed. The many synthetic examples and existing commercial applications demonstrate the validity of the proposed model and its flexibility.) <|cite_end|>, PCA trees <|cite_start|> (Reference: Which Spatial Partition Trees are Adaptive to Intrinsic Dimension?: Recent theory work has found that a special type of spatial partition tree - called a random projection tree - is adaptive to the intrinsic dimension of the data from which it is built. Here we examine this same question, with a combination of theory and experiments, for a broader class of trees that includes k-d trees, dyadic trees, and PCA trees. Our motivation is to get a feel for (i) the kind of intrinsic low dimensional structure that can be empirically verified, (ii) the extent to which a spatial partition can exploit such structure, and (iii) the implications for standard statistical tasks such as regression, vector quantization, and nearest neighbor search.) <|cite_end|>, approximate principal direction trees <|cite_start|> (Reference: Approximate Principal Direction Trees: We introduce a new spatial data structure for high dimensional data called the \emph{approximate principal direction tree} (APD tree) that adapts to the intrinsic dimension of the data. Our algorithm ensures vector-quantization accuracy similar to that of computationally-expensive PCA trees with similar time-complexity to that of lower-accuracy RP trees. APD trees use a small number of power-method iterations to find splitting planes for recursively partitioning the data. As such they provide a natural trade-off between the running-time and accuracy achieved by RP and PCA trees. Our theoretical results establish a) strong performance guarantees regardless of the convergence rate of the power-method and b) that $O(\log d)$ iterations suffice to establish the guarantee of PCA trees when the intrinsic dimension is $d$. We demonstrate this trade-off and the efficacy of our data structure on both the CPU and GPU.) <|cite_end|>, and geodesic forests <|cite_start|> (Reference: Geodesic Forests: Together with the curse of dimensionality, nonlinear dependencies in large data sets persist as major challenges in data mining tasks. A reliable way to accurately preserve nonlinear structure is to compute geodesic distances between data points. 
Manifold learning methods, such as Isomap, aim to preserve geodesic distances in a Riemannian manifold. However, as manifold learning algorithms operate on the ambient dimensionality of the data, the essential step of geodesic distance computation is sensitive to high-dimensional noise. Therefore, a direct application of these algorithms to high-dimensional, noisy data often yields unsatisfactory results and does not accurately capture nonlinear structure. We propose an unsupervised random forest approach called geodesic forests (GF) to geodesic distance estimation in linear and nonlinear manifolds with noise. GF operates on low-dimensional sparse linear combinations of features, rather than the full observed dimensionality. To choose the optimal split in a computationally efficient fashion, we developed Fast-BIC, a fast Bayesian Information Criterion statistic for Gaussian mixture models. We additionally propose geodesic precision and geodesic recall as novel evaluation metrics that quantify how well the geodesic distances of a latent manifold are preserved. Empirical results on simulated and real data demonstrate that GF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings. Notably, GF is able to estimate geodesic distances better than other approaches on a real connectome dataset.) <|cite_end|>. These variants are often effective at learning the manifold of the data when the data variables are continuous and distances or Gaussian affinities can be defined between datapoints. However, for us this creates a chicken-and-egg problem. Our purpose in creating a random forest is to derive a meaningful distance in situations where there are missing values and categorical variables, where simple Euclidean distances are not meaningful. For example, Criminisi's manifold forests <|cite_start|> (Reference: Decision Forests: A Unified Framework For Classification, Regression, Density Estimation, Manifold Learning and Semi-supervised Learning: This review presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision, and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning, and active learning under the same decision forest framework. This gives us the opportunity to write and optimize the core implementation only once, with application to many diverse tasks. The proposed model may be used both in a discriminative or generative way and may be applied to discrete or continuous, labeled or unlabeled data. The main contributions of this review are: (1) Proposing a unified, probabilistic and efficient model for a variety of learning tasks; (2) Demonstrating margin-maximizing properties of classification forests; (3) Discussing probabilistic regression forests in comparison with other nonlinear regression algorithms; (4) Introducing density forests for estimating probability density functions; (5) Proposing an efficient algorithm for sampling from a density forest; (6) Introducing manifold forests for nonlinear dimensionality reduction; (7) Proposing new algorithms for transductive learning and active learning. Finally, we discuss how alternatives such as random ferns and extremely randomized trees stem from our more general forest model. 
This document is directed at both students who wish to learn the basics of decision forests, as well as researchers interested in the new contributions. It presents both fundamental and novel concepts in a structured way, with many illustrative examples and real-world applications. Thorough comparisons with state-of-the-art algorithms such as support vector machines, boosting and Gaussian processes are presented and relative advantages and disadvantages discussed. The many synthetic examples and existing commercial applications demonstrate the validity of the proposed model and its flexibility.) <|cite_end|> use trees whose nodes minimize the following information gain measure when splitting \begin{equation} \label{eqn:Criminisi-infogain} I_G(S_j) = \log(| \Lambda(S_j) |) - \sum_{i \in \{L, R\}} \frac{|S_j^i|}{|S_j|} \log(| \Lambda(S_j^i) |). \end{equation} Here, $S_j$ is the set of datapoints that node $j$ partitions, and $S_j^L$ and $S_j^R$ are the sets of datapoints from $S_j$ that get sent to the left and right child of node $j$, respectively. The matrix $\Lambda(S)$ is a set's covariance matrix, which is undefined in our case with missing values. Furthermore, unless binary affinities are chosen, the affinity matrices defined using manifold forests depend on preexisting distances between datapoints. Thus we define a new type of tree that can tolerate missing values and mixtures of variables, and that can itself be used to compute a new type of distance. \subsection{Wasserstein Distance over Trees} The 1-Wasserstein distance (also known as the earth mover's distance) measures the total cost of shifting the mass of one probability distribution to match another. For discrete probability distributions over a general metric space this can be computed exactly in $O(n^3)$ time using the Hungarian algorithm <|cite_start|> (Reference: Computational {Optimal Transport: Optimal transport (OT) theory can be informally described using the words of the French mathematician Gaspard Monge (1746-1818): A worker with a shovel in hand has to move a large pile of sand lying on a construction site. The goal of the worker is to erect with all that sand a target pile with a prescribed shape (for example, that of a giant sand castle). Naturally, the worker wishes to minimize her total effort, quantified for instance as the total distance or time spent carrying shovelfuls of sand. Mathematicians interested in OT cast that problem as that of comparing two probability distributions, two different piles of sand of the same volume. They consider all of the many possible ways to morph, transport or reshape the first pile into the second, and associate a "global" cost to every such transport, using the "local" consideration of how much it costs to move a grain of sand from one place to another. Recent years have witnessed the spread of OT in several fields, thanks to the emergence of approximate solvers that can scale to sizes and dimensions that are relevant to data sciences. Thanks to this newfound scalability, OT is being increasingly used to unlock various problems in imaging sciences (such as color or texture processing), computer vision and graphics (for shape manipulation) or machine learning (for regression, classification and density fitting). This short book reviews OT with a bias toward numerical methods and their applications in data sciences, and sheds lights on the theoretical properties of OT that make it particularly useful for some of these applications.)
<|cite_end|>, and approximated using entropic regularization in $O(n^2)$ time <|cite_start|> (Reference: Sinkhorn Distances: Lightspeed Computation of Optimal Transport: Optimal transportation distances are a fundamental family of parameterized distances for histograms. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost is prohibitive whenever the histograms' dimension exceeds a few hundreds. We propose in this work a new family of optimal transportation distances that look at transportation problems from a maximum-entropy perspective. We smooth the classical optimal transportation problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn-Knopp's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transportation solvers. We also report improved performance over classical optimal transportation distances on the MNIST benchmark problem.) <|cite_end|>. However, for discrete probability distributions over a tree metric space the 1-Wasserstein distance can be computed exactly in linear time <|cite_start|> (Reference: Tree-Sliced Variants of Wasserstein Distances: Optimal transport (\OT) theory defines a powerful set of tools to compare probability distributions. \OT~suffers however from a few drawbacks, computational and statistical, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the \textit{sliced} formulation, which exploits the closed-form formula between univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely \textit{tree metrics}, which also yield fast closed-form computations and negative definite, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between these measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks.) <|cite_end|>. Given two probability distributions $\mu, \nu$ over a measurable space $\Omega$ with metric $\rho(\cdot,\cdot)$, let $\Pi(\mu, \nu)$ be the set of joint probability distributions $\pi$ on the space $\Omega \times \Omega$, where for any subset $\omega \subset \Omega$, $\pi(\omega \times \Omega) = \mu(\omega)$ and $\pi(\Omega \times \omega) = \nu(\omega)$. The 1-Wasserstein distance between $\mu$ and $\nu$ is defined as: \begin{equation}\label{eq:wasserstein} W_\rho(\mu, \nu) := \inf_{\pi \in \Pi(\mu, \nu)} \int_{\Omega \times \Omega} \rho(x, y) \pi(dx, dy). \end{equation} Let $\| \cdot \|_{L_\rho}$ denote the Lipschitz norm w.r.t.\ $\rho$. When $\Omega$ is separable w.r.t.\ $\rho$ and $\mu, \nu$ have bounded support, the dual of \eqref{eq:wasserstein}, known as the Kantorovich-Rubinstein dual, can be expressed as: \begin{equation}\label{eq:dual} W_\rho(\mu, \nu) = \sup_{\| f \|_{L_\rho} \le 1} \int_\Omega f(x) \mu(dx) - \int_\Omega f(y) \nu(dy).
\end{equation} When $\rho$ is a tree metric over a rooted tree $T$, for every pair of points $x, y \in \Omega$, $\rho(x,y)$ is the total weight of the (unique) path between nodes $x$ and $y$ in $T$. Denoting the edge weight associated with each node $t$ by $w_t$, and the sum of the mass of $\mu$ at and below node $t$ by $D(t, \mu)$, the Wasserstein distance between two distributions on $T$ can be expressed as: \begin{equation} W_{\rho_T}(\mu, \nu) = \sum_{t \in T} w_t \left | D(t, \mu) - D(t, \nu) \right |. \end{equation} Previous work demonstrated unsupervised forest constructions that approximate the Wasserstein distance when $\rho$ is the Euclidean ground metric over $\Omega \equiv \mathbb{R}^d$ <|cite_start|> (Reference: Report for CSE 5339 2018 — ( OTMLSA ) Optimal Transport in Machine Learning and Shape Analysis Fast Image Retrieval via Embeddings: The central question this paper addresses is how to build a data structure that quickly identifies the images that are closest to a query image. The authors note that early work represents images as points in multidimensional space, and uses a norm to define the distances between points. To improve the quality of the results, a variety of other metrics (such as the Earth Movers Distance (EMD)) were proposed [RTG00], but for unnormed metrics like EMD, nearest neighbor data structures such as kd-trees or R-trees cannot be used. The main contributions of Indyk and Thaper in [IT03] is a "low distortion" embedding of EMD into Rd with the `1 norm, and a data structure to solve approximate nearest neighbor on this space.) <|cite_end|> <|cite_start|> (Reference: Tree-Sliced Variants of Wasserstein Distances: Optimal transport (\OT) theory defines a powerful set of tools to compare probability distributions. \OT~suffers however from a few drawbacks, computational and statistical, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the \textit{sliced} formulation, which exploits the closed-form formula between univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely \textit{tree metrics}, which also yield fast closed-form computations and negative definite, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between these measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks.) <|cite_end|> <|cite_start|> (Reference: Scalable Nearest Neighbor Search for Optimal Transport: The Optimal Transport (a.k.a. Wasserstein) distance is an increasingly popular similarity measure for rich data domains, such as images or text documents. This raises the necessity for fast nearest neighbor search with respect to this distance, a problem that poses a substantial computational bottleneck for various tasks on massive datasets. In this work, we study fast tree-based approximation algorithms for searching nearest neighbors w.r.t. the Wasserstein-1 distance. A standard tree-based technique, known as Quadtree, has been previously shown to obtain good results.
We introduce a variant of this algorithm, called Flowtree, and formally prove it achieves asymptotically better accuracy. Our extensive experiments, on real-world text and image datasets, show that Flowtree improves over various baselines and existing methods in either running time or accuracy. In particular, its quality of approximation is in line with previous high-accuracy methods, while its running time is much faster.) <|cite_end|>. In MURAL, we construct an unsupervised random forest over a high-dimensional $\Omega$ that consists of continuous, categorical and missing variables. These trees subsequently define a distance on $\Omega$, which in turn defines a Wasserstein distance between distributions on $\Omega$ and, because of the specific construction of MURAL trees, admits a simple feature importance measure described in Section~\ref{applications}.
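As a closing illustration, the following is a minimal sketch of the closed-form tree-Wasserstein computation above; the array-based tree encoding (parents listed before children) and the helper name are assumptions made for brevity.

\begin{verbatim}
import numpy as np

def tree_wasserstein(parent, w, mu, nu):
    """Sum over nodes t of w_t * |D(t, mu) - D(t, nu)| on a rooted tree.

    parent[t] is the index of t's parent (root has parent -1), w[t] is
    the weight of the edge entering t, and mu, nu give the mass placed
    at each node. Nodes are assumed topologically ordered (parents
    before children), so a reverse sweep accumulates subtree masses."""
    d_mu = np.asarray(mu, float).copy()
    d_nu = np.asarray(nu, float).copy()
    for t in range(len(parent) - 1, 0, -1):   # leaves up toward the root
        d_mu[parent[t]] += d_mu[t]
        d_nu[parent[t]] += d_nu[t]
    return float(np.sum(w[1:] * np.abs(d_mu[1:] - d_nu[1:])))

# Toy usage: a 5-node tree and two cohort distributions over its nodes.
parent = np.array([-1, 0, 0, 1, 1])
w = np.array([0.0, 1.0, 1.0, 0.5, 0.5])
mu = np.array([0.0, 0.0, 0.5, 0.25, 0.25])
nu = np.array([0.0, 0.0, 0.0, 0.5, 0.5])
print(tree_wasserstein(parent, w, mu, nu))    # -> 1.25
\end{verbatim}

Averaging this quantity over the trees of a MURAL forest yields a tree-sliced Wasserstein distance between two patient cohorts and, since every node is associated with a split variable, the per-node terms suggest which variables drive the difference. <|paper_end|>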
[ "<|reference_start|> Visualizing structure and transitions in high-dimensional biological data: <|reference_end|>", "<|reference_start|> {Introduction to manifold learning: A popular research area today in statistics and machine learning is that of manifold learning, which is related to the algorithmic techniques of dimensionality reduction. Manifold learning can be divided into linear and nonlinear methods. Linear methods, which have long been part of the statistician's toolbox for analyzing multivariate data, include principal component analysis (PCA) and multidimensional scaling (MDS). Recently, there has been a flurry of research activity on nonlinear manifold learning, which includes Isomap, local linear embedding, Laplacian eigenmaps, Hessian eigenmaps, and diffusion maps. Some of these techniques are nonlinear generalizations of the linear methods. The algorithmic process of most of these techniques consists of three steps: a nearest‐neighbor search, a definition of distances or affinities between points (a key ingredient for the success of these methods), and an eigenproblem for embedding high‐dimensional points into a lower dimensional space. This article gives us a brief survey of these new methods and indicates their strengths and weaknesses. WIREs Comput Stat 2012 doi: 10.1002/wics.1222 <|reference_end|>", "<|reference_start|> Visualizing structure and transitions in high-dimensional biological data: <|reference_end|>", "<|reference_start|> Tree-Sliced Variants of Wasserstein Distances: Optimal transport (\\OT) theory defines a powerful set of tools to compare probability distributions. \\OT~suffers however from a few drawbacks, computational and statistical, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the \\textit{sliced} formulation, which exploits the closed-form formula between univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely \\textit{tree metrics}, which also yield fast closed-form computations and negative definite, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between these measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks. <|reference_end|>" ]
[ 7, 14, 18, 34 ]
{"<|multi_cite_1_1|>": "ss-1329449", "<|multi_cite_1_2|>": "ss-2249634", "<|multi_cite_1_3|>": "ss-1329450", "<|cite_3|>": "arxiv-147803", "<|cite_4|>": "ss-1935736", "<|cite_5|>": "ss-1657781", "<|cite_6|>": "ss-714841", "<|multi_cite_7_2|>": "ss-1657781", "<|multi_cite_7_3|>": "arxiv-147803", "<|multi_cite_7_4|>": "ss-1935736", "<|cite_8|>": "ss-1403104", "<|multi_cite_9_1|>": "ss-1387029", "<|multi_cite_9_2|>": "ss-1935736", "<|multi_cite_9_3|>": "ss-1275806", "<|multi_cite_9_4|>": "ss-2134425", "<|cite_10|>": "ss-1935736", "<|cite_11|>": "ss-1935736", "<|cite_12|>": "ss-1935736", "<|cite_13|>": "ss-1657781", "<|multi_cite_14_2|>": "arxiv-147803", "<|cite_15|>": "ss-714841", "<|cite_16|>": "ss-829446", "<|cite_17|>": "ss-994961", "<|multi_cite_18_1|>": "ss-832115", "<|multi_cite_18_2|>": "ss-1328703", "<|cite_19|>": "ss-1403104", "<|cite_20|>": "arxiv-31682", "<|cite_21|>": "arxiv-33002", "<|cite_22|>": "ss-1620572", "<|cite_23|>": "ss-1403104", "<|cite_24|>": "ss-769125", "<|cite_25|>": "ss-1329451", "<|cite_26|>": "ss-752626", "<|multi_cite_27_1|>": "ss-987581", "<|multi_cite_27_2|>": "ss-752626", "<|multi_cite_27_3|>": "ss-1329452"}
2307.12348-1
<|cite_start|> (Reference: DifFace: Blind Face Restoration with Diffused Error Contraction: While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with $L_2$ loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations. Code and model are available at https://github.com/zsyOAOA/DifFace.) <|cite_end|>. Both strategies often require hundreds or thousands of sampling steps to generate a realistic HR image. While several acceleration algorithms <|cite_start|> (Reference: Improved Denoising Diffusion Probabilistic Models: Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality, which is important for the practical deployment of these models. We additionally use precision and recall to compare how well DDPMs and GANs cover the target distribution. Finally, we show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable. We release our code at https://github.com/openai/improved-diffusion) <|cite_end|> <|cite_start|> (Reference: Denoising Diffusion Implicit Models: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. 
We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.) <|cite_end|> <|cite_start|> (Reference: DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps: Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from their slow sampling as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DPMs can be viewed alternatively as solving the corresponding diffusion ordinary differential equations (ODEs). In this work, we propose an exact formulation of the solution of diffusion ODEs. The formulation analytically computes the linear part of the solution, rather than leaving all terms to black-box ODE solvers as adopted in previous works. By applying change-of-variable, the solution can be equivalently simplified to an exponentially weighted integral of the neural network. Based on our formulation, we propose DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-time DPMs without any further training. Experimental results show that DPM-Solver can generate high-quality samples in only 10 to 20 function evaluations on various datasets. We achieve 4.70 FID in 10 function evaluations and 2.87 FID in 20 function evaluations on the CIFAR10 dataset, and a $4\sim 16\times$ speedup compared with previous state-of-the-art training-free samplers on various datasets.) <|cite_end|> have been proposed, they typically sacrifice performance and result in blurry outputs. This work designs a more efficient diffusion model that overcomes this trade-off between efficiency and performance, as detailed in Sec.~\ref{sec:method}. \textbf{Remark}. Several parallel works <|cite_start|> (Reference: Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration: Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration that avoids the so-called "regression to the mean" effect and produces more realistic and detailed images than existing regression-based methods. It does this by gradually improving image quality in small steps, similar to generative denoising diffusion models. Image restoration is an ill-posed problem where multiple high-quality images are plausible reconstructions of a given low-quality input. Therefore, the outcome of a single step regression model is typically an aggregate of all possible explanations, therefore lacking details and realism. The main advantage of InDI is that it does not try to predict the clean target image in a single step but instead gradually improves the image in small steps, resulting in better perceptual quality. While generative denoising diffusion models also work in small steps, our formulation is distinct in that it does not require knowledge of any analytic form of the degradation process. Instead, we directly learn an iterative restoration process from low-quality and high-quality paired examples. InDI can be applied to virtually any image degradation, given paired training data.
In conditional denoising diffusion image restoration the denoising network generates the restored image by repeatedly denoising an initial image of pure noise, conditioned on the degraded input. Contrary to conditional denoising formulations, InDI directly proceeds by iteratively restoring the input low-quality image, producing high-quality results on a variety of image restoration tasks, including motion and out-of-focus deblurring, super-resolution, compression artifact removal, and denoising.) <|cite_end|> <|cite_start|> (Reference: Image Restoration with Mean-Reverting Stochastic Differential Equations: This paper presents a stochastic differential equation (SDE) approach for general-purpose image restoration. The key construction consists in a mean-reverting SDE that transforms a high-quality image into a degraded counterpart as a mean state with fixed Gaussian noise. Then, by simulating the corresponding reverse-time SDE, we are able to restore the origin of the low-quality image without relying on any task-specific prior knowledge. Crucially, the proposed mean-reverting SDE has a closed-form solution, allowing us to compute the ground truth time-dependent score and learn it with a neural network. Moreover, we propose a maximum likelihood objective to learn an optimal reverse trajectory that stabilizes the training and improves the restoration results. The experiments show that our proposed method achieves highly competitive performance in quantitative comparisons on image deraining, deblurring, and denoising, setting a new state-of-the-art on two deraining datasets. Finally, the general applicability of our approach is further demonstrated via qualitative results on image super-resolution, inpainting, and dehazing. Code is available at https://github.com/Algolzw/image-restoration-sde.) <|cite_end|> also exploit such an iterative restoration paradigm in SR. Despite a similar motivation, our work and others have adopted different mathematical formulations to achieve this goal. <|cite_start|> (Reference: Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration: Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration that avoids the so-called "regression to the mean" effect and produces more realistic and detailed images than existing regression-based methods. It does this by gradually improving image quality in small steps, similar to generative denoising diffusion models. Image restoration is an ill-posed problem where multiple high-quality images are plausible reconstructions of a given low-quality input. Therefore, the outcome of a single step regression model is typically an aggregate of all possible explanations, therefore lacking details and realism. The main advantage of InDI is that it does not try to predict the clean target image in a single step but instead gradually improves the image in small steps, resulting in better perceptual quality. While generative denoising diffusion models also work in small steps, our formulation is distinct in that it does not require knowledge of any analytic form of the degradation process. Instead, we directly learn an iterative restoration process from low-quality and high-quality paired examples. InDI can be applied to virtually any image degradation, given paired training data. In conditional denoising diffusion image restoration the denoising network generates the restored image by repeatedly denoising an initial image of pure noise, conditioned on the degraded input.
Contrary to conditional denoising formulations, InDI directly proceeds by iteratively restoring the input low-quality image, producing high-quality results on a variety of image restoration tasks, including motion and out-of-focus deblurring, super-resolution, compression artifact removal, and denoising.) <|cite_end|> employed the Inversion by Direct Iteration (InDI) to model this process, while <|cite_start|> (Reference: Image Restoration with Mean-Reverting Stochastic Differential Equations: This paper presents a stochastic differential equation (SDE) approach for general-purpose image restoration. The key construction consists in a mean-reverting SDE that transforms a high-quality image into a degraded counterpart as a mean state with fixed Gaussian noise. Then, by simulating the corresponding reverse-time SDE, we are able to restore the origin of the low-quality image without relying on any task-specific prior knowledge. Crucially, the proposed mean-reverting SDE has a closed-form solution, allowing us to compute the ground truth time-dependent score and learn it with a neural network. Moreover, we propose a maximum likelihood objective to learn an optimal reverse trajectory that stabilizes the training and improves the restoration results. The experiments show that our proposed method achieves highly competitive performance in quantitative comparisons on image deraining, deblurring, and denoising, setting a new state-of-the-art on two deraining datasets. Finally, the general applicability of our approach is further demonstrated via qualitative results on image super-resolution, inpainting, and dehazing. Code is available at https://github.com/Algolzw/image-restoration-sde.) <|cite_end|> attempted to formulate it as an SDE. In this paper, we design a discrete Markov chain to depict the transition between the HR and LR images, offering a more intuitive and efficient solution to this problem.
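As a rough illustration of such a residual-shifting chain, the sketch below draws $x_t = x_0 + \eta_t (y_0 - x_0) + \kappa \sqrt{\eta_t}\, \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, where $x_0$ is the HR image, $y_0$ the (pre-upsampled) LR image, and $\eta_t$ grows monotonically from near 0 to near 1. The power-law dependence on $p$ and the endpoint values of $\eta_t$ are our own illustrative assumptions, chosen only to mirror the $(T, p, \kappa)$ hyper-parameters ablated in the table below; the paper's exact schedule may differ.

\begin{verbatim}
import numpy as np

def shift_schedule(T, p, eta_1=1e-3, eta_T=0.999):
    # Hypothetical shifting sequence: eta_t climbs geometrically from
    # eta_1 to eta_T along steps in t that are power-law warped by p.
    t = np.arange(T, dtype=float)
    beta = (t / (T - 1)) ** p * (T - 1)          # 0 .. T-1, warped by p
    ratio = (eta_T / eta_1) ** (1.0 / (T - 1))   # per-step growth factor
    return eta_1 * ratio ** beta

def forward_sample(x0, y0, t, eta, kappa, rng):
    # One marginal of the forward chain: shift a fraction eta[t] of the
    # residual (y0 - x0) onto x0, add noise of scale kappa*sqrt(eta[t]).
    noise = rng.standard_normal(x0.shape)
    return x0 + eta[t] * (y0 - x0) + kappa * np.sqrt(eta[t]) * noise

eta = shift_schedule(T=15, p=0.3)
rng = np.random.default_rng(0)
x0, y0 = rng.standard_normal((2, 64, 64))  # stand-ins for HR / LR images
x_T = forward_sample(x0, y0, t=14, eta=eta, kappa=2.0, rng=rng)
# At t = T-1, eta is close to 1, so x_T is roughly the LR image plus
# noise; the learned reverse chain would walk from there back toward x0.
\end{verbatim}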
\begin{table}[t] \centering \caption{Performance comparison of \textit{ResShift} on the \textit{ImageNet-Test} under different configurations.} \label{tab:schedules} \small \vspace{-2mm} \begin{tabular}{@{}C{1.6cm}@{}|@{}C{1.6cm}@{}|@{}C{1.6cm}@{}| @{}C{1.6cm}@{} @{}C{1.8cm}@{} @{}C{1.8cm}@{} @{}C{2.0cm}@{} @{}C{2.0cm}@{} } \Xhline{0.8pt} \multicolumn{3}{c|}{Configurations} & \multicolumn{5}{c}{Metrics} \\ \Xhline{0.4pt} $T$ & $p$ & $\kappa$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & CLIPIQA$\uparrow$ & MUSIQ$\uparrow$ \\ \Xhline{0.4pt} 10 & \multirow{5}*{0.3} & \multirow{5}*{2.0} & 25.20 & 0.6828 & 0.2517 & 0.5492 & 50.6617 \\ 15 & & & 25.01 & 0.6769 & 0.2312 & 0.5922 & 53.6596 \\ 30 & & & 24.52 & 0.6585 & 0.2253 & 0.6273 & 55.7904 \\ 40 & & & 24.29 & 0.6513 & 0.2225 & 0.6468 & 56.8482 \\ 50 & & & 24.22 & 0.6483 & 0.2212 & 0.6489 & 56.8463 \\ \hline \hline \multirow{5}*{15} & 0.3 & \multirow{5}*{2.0} & 25.01 & 0.6769 & 0.2312 & 0.5922 & 53.6596 \\ & 0.5 & & 25.05 & 0.6745 & 0.2387 & 0.5816 & 52.4475 \\ & 1.0 & & 25.12 & 0.6780 & 0.2613 & 0.5314 & 48.4964 \\ & 2.0 & & 25.32 & 0.6827 & 0.3050 & 0.4601 & 43.3060 \\ & 3.0 & & 25.39 & 0.5813 & 0.3432 & 0.4041 & 38.5324 \\ \hline \hline \multirow{6}*{15} & \multirow{5}*{0.3} & 0.5 & 24.90 & 0.6709 & 0.2437 & 0.5700 & 50.6101 \\ & & 1.0 & 24.84 & 0.6699 & 0.2354 & 0.5914 & 52.9933 \\ & & 2.0 & 25.01 & 0.6769 & 0.2312 & 0.5922 & 53.6596 \\ & & 8.0 & 25.31 & 0.6858 & 0.2592 & 0.5231 & 49.3182 \\ & & 16.0 & 24.46 & 0.6891 & 0.2772 & 0.4898 & 46.9794 \\ \Xhline{0.4pt} \end{tabular} \vspace{-2mm} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{./figures/ablation_0328_v2.pdf} \vspace{-6mm} \caption{Qualitative comparisons of \textit{ResShift} under different combinations of ($T$, $p$, $\kappa$). For example, ``(15, 0.3, 2.0)'' represents the recovered result with $T=15$, $p=0.3$, and $\kappa=2.0$. Please zoom in for a better view.} \label{fig:ablation_schedule} \end{figure} <|paper_end|>
[ "<|reference_start|> DifFace: Blind Face Restoration with Diffused Error Contraction: While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with $L_2$ loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations. Code and model are available at https://github.com/zsyOAOA/DifFace. <|reference_end|>", "<|reference_start|> Improved Denoising Diffusion Probabilistic Models: Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality, which is important for the practical deployment of these models. We additionally use precision and recall to compare how well DDPMs and GANs cover the target distribution. Finally, we show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable. We release our code at https://github.com/openai/improved-diffusion <|reference_end|>", "<|reference_start|> Image Restoration with Mean-Reverting Stochastic Differential Equations: This paper presents a stochastic differential equation (SDE) approach for general-purpose image restoration. The key construction consists in a mean-reverting SDE that transforms a high-quality image into a degraded counterpart as a mean state with fixed Gaussian noise. Then, by simulating the corresponding reverse-time SDE, we are able to restore the origin of the low-quality image without relying on any task-specific prior knowledge. Crucially, the proposed mean-reverting SDE has a closed-form solution, allowing us to compute the ground truth time-dependent score and learn it with a neural network. Moreover, we propose a maximum likelihood objective to learn an optimal reverse trajectory that stabilizes the training and improves the restoration results. 
The experiments show that our proposed method achieves highly competitive performance in quantitative comparisons on image deraining, deblurring, and denoising, setting a new state-of-the-art on two deraining datasets. Finally, the general applicability of our approach is further demonstrated via qualitative results on image super-resolution, inpainting, and dehazing. Code is available at https://github.com/Algolzw/image-restoration-sde. <|reference_end|>", "<|reference_start|> Image Restoration with Mean-Reverting Stochastic Differential Equations: This paper presents a stochastic differential equation (SDE) approach for general-purpose image restoration. The key construction consists in a mean-reverting SDE that transforms a high-quality image into a degraded counterpart as a mean state with fixed Gaussian noise. Then, by simulating the corresponding reverse-time SDE, we are able to restore the origin of the low-quality image without relying on any task-specific prior knowledge. Crucially, the proposed mean-reverting SDE has a closed-form solution, allowing us to compute the ground truth time-dependent score and learn it with a neural network. Moreover, we propose a maximum likelihood objective to learn an optimal reverse trajectory that stabilizes the training and improves the restoration results. The experiments show that our proposed method achieves highly competitive performance in quantitative comparisons on image deraining, deblurring, and denoising, setting a new state-of-the-art on two deraining datasets. Finally, the general applicability of our approach is further demonstrated via qualitative results on image super-resolution, inpainting, and dehazing. Code is available at https://github.com/Algolzw/image-restoration-sde. <|reference_end|>" ]
[ 0, 1, 5, 7 ]
{"<|multi_cite_1_1|>": "arxiv-74487", "<|multi_cite_1_2|>": "arxiv-273164", "<|cite_2|>": "arxiv-340336", "<|multi_cite_3_1|>": "arxiv-358665", "<|multi_cite_3_2|>": "arxiv-383950", "<|multi_cite_4_1|>": "arxiv-394473", "<|multi_cite_4_2|>": "arxiv-386478", "<|multi_cite_5_1|>": "arxiv-306081", "<|multi_cite_5_2|>": "arxiv-380305", "<|multi_cite_6_1|>": "arxiv-334777", "<|multi_cite_6_2|>": "arxiv-388766", "<|cite_7|>": "arxiv-273164", "<|multi_cite_8_1|>": "ss-2287221", "<|multi_cite_8_2|>": "arxiv-386478", "<|multi_cite_8_3|>": "arxiv-469198", "<|multi_cite_8_4|>": "arxiv-504167", "<|multi_cite_9_1|>": "arxiv-322225", "<|multi_cite_9_2|>": "arxiv-294169", "<|multi_cite_9_3|>": "arxiv-424264", "<|cite_10|>": "arxiv-294169", "<|cite_11|>": "arxiv-329923", "<|cite_12|>": "arxiv-356657", "<|cite_13|>": "arxiv-362535", "<|cite_14|>": "arxiv-408607", "<|cite_15|>": "arxiv-388766", "<|cite_16|>": "arxiv-294169", "<|cite_32|>": "arxiv-74487", "<|cite_33|>": "arxiv-273164", "<|cite_34|>": "arxiv-306081", "<|multi_cite_28_1|>": "arxiv-340336", "<|multi_cite_28_2|>": "arxiv-388766", "<|cite_29|>": "arxiv-287696", "<|cite_30|>": "arxiv-251369", "<|cite_31|>": "arxiv-284602", "<|cite_17|>": "ss-1086369", "<|cite_18|>": "ss-693768", "<|multi_cite_19_1|>": "arxiv-17774", "<|multi_cite_19_2|>": "ss-1822952", "<|cite_35|>": "arxiv-71013", "<|multi_cite_20_1|>": "arxiv-105956", "<|multi_cite_20_2|>": "arxiv-103885", "<|multi_cite_20_3|>": "arxiv-121554", "<|multi_cite_20_4|>": "arxiv-150830", "<|multi_cite_21_1|>": "arxiv-330823", "<|multi_cite_21_2|>": "arxiv-307046", "<|multi_cite_21_3|>": "arxiv-256405", "<|multi_cite_21_4|>": "arxiv-352564", "<|multi_cite_22_1|>": "arxiv-197329", "<|multi_cite_22_2|>": "arxiv-255192", "<|multi_cite_22_3|>": "arxiv-447830", "<|multi_cite_23_1|>": "arxiv-143322", "<|multi_cite_23_2|>": "arxiv-329923", "<|multi_cite_23_3|>": "arxiv-356657", "<|multi_cite_23_4|>": "ss-1345578", "<|multi_cite_24_1|>": "ss-1173556", "<|multi_cite_24_2|>": "arxiv-334777", "<|multi_cite_24_3|>": "arxiv-388766", "<|multi_cite_25_1|>": "ss-2287221", "<|multi_cite_25_2|>": "arxiv-386478", "<|multi_cite_25_3|>": "arxiv-395285", "<|multi_cite_25_4|>": "arxiv-469198", "<|multi_cite_26_1|>": "arxiv-322225", "<|multi_cite_26_2|>": "arxiv-294169", "<|multi_cite_26_3|>": "arxiv-424264", "<|multi_cite_27_1|>": "arxiv-490466", "<|multi_cite_27_2|>": "arxiv-477103", "<|cite_36|>": "arxiv-490466", "<|cite_37|>": "arxiv-477103"}
1602.02215
<|paper_start|> Title: Swivel: Improving Embeddings by Noticing What's Missing Abstract: Swivel: Improving Embeddings by Noticing What's Missing: We present Submatrix-wise Vector Embedding Learner (Swivel), a method for generating low-dimensional feature embeddings from a feature co-occurrence matrix. Swivel performs approximate factorization of the point-wise mutual information matrix via stochastic gradient descent. It uses a piecewise loss with special handling for unobserved co-occurrences, and thus makes use of all the information in the matrix. While this requires computation proportional to the size of the entire matrix, we make use of vectorized multiplication to process thousands of rows and columns at once to compute millions of predicted values. Furthermore, we partition the matrix into shards in order to parallelize the computation across many nodes. This approach results in more accurate embeddings than can be achieved with methods that consider only observed co-occurrences, and can scale to much larger corpora than can be handled with sampling methods. Introduction \label{introduction} Dense vector representations of words have proven to be useful for natural language tasks such as determining semantic similarity, parsing, and translation. Recently, work by <|cite_start|> (Reference: Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.) <|cite_end|> and others has inspired an investigation into the construction of word vectors using stochastic gradient descent methods. Models tend to fall into one of two categories: matrix factorization or sampling from a sliding window; <|cite_start|> (Reference: Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors: Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.) <|cite_end|> refers to these as ``count'' and ``predict'' methods, respectively. In this paper, we present the \emph{Submatrix-wise Vector Embedding Learner} (Swivel), a ``count-based'' method for generating low-dimensional feature embeddings from a co-occurrence matrix.
Swivel uses stochastic gradient descent to perform a weighted approximate matrix factorization, ultimately arriving at embeddings that reconstruct the point-wise mutual information (PMI) between each row and column feature. Swivel uses a piecewise loss function to differentiate between observed and unobserved co-occurrences. Swivel is designed to work in a distributed environment. The original co-occurrence matrix (which may contain millions of rows and millions of columns) is ``sharded'' into smaller submatrices, each containing thousands of rows and columns. These can be distributed across multiple workers, each of which uses vectorized matrix multiplication to rapidly produce predictions for millions of individual PMI values. This allows the computation to be distributed across a cluster of computers, resulting in an efficient way to learn embeddings. This paper is organized as follows. First, we describe related word embedding work and note how two popular methods are similar to one another in their optimization objective. We then discuss Swivel in detail, and describe experimental results on several standard word embedding evaluation tasks. We conclude with analysis of our results and discussion of the algorithm with regard to the other approaches. Related Work While there are a number of interesting approaches to creating word embeddings, Skipgram Negative Sampling <|cite_start|> (Reference: Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.) <|cite_end|> and GloVe <|cite_start|> (Reference: GloVe: Global Vectors for word representation: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.) <|cite_end|> are two relatively recent approaches that have received quite a bit of attention. These methods compress the distributional structure of the raw language co-occurrence statistics, yielding compact representations that retain the properties of the original space. The intrinsic quality of the embeddings can be evaluated in two ways. 
First, words with similar distributional contexts should be near to one another in the embedding space. Second, manipulating the distributional context directly by adding or removing words ought to lead to similar translations in the embedded space, allowing ``analogical'' traversal of the vector space. \textbf{Skipgram Negative Sampling.} The \texttt{word2vec} program released by <|cite_start|> (Reference: Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.) <|cite_end|> generates word embeddings by sliding a window over a large corpus of text. The ``focus'' word in the center of the window is trained to predict each ``context'' word that surrounds it by 1) maximizing the dot product between the sampled words' embeddings, and 2) minimizing the dot product between the focus word and a randomly sampled non-context word. This method of training is called \emph{skipgram negative sampling} (SGNS). <|cite_start|> (Reference: Neural Word Embedding As Implicit Matrix Factorization: We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.) <|cite_end|> examine SGNS and suggest that the algorithm is implicitly performing weighted low-rank factorization of a matrix whose cell values are related to the \emph{point-wise mutual information} between the focus and context words. Point-wise mutual information (PMI) is a measure of association between two events, defined as follows: \begin{equation} \mathbf{pmi}(i;j) = \log \frac{P(i,j)}{P(i)\,P(j)} \end{equation} In the case of language, the frequency statistics of co-occurring words in a corpus can be used to estimate the probabilities that comprise PMI. Let $x_{ij}$ be the number of times that focus word $i$ co-occurs with the context word $j$, $x_{i*} = \sum_j x_{ij}$ be the total number of times that focus word $i$ appears in the corpus, $x_{*j} = \sum_i x_{ij}$ be the total number of times that context word $j$ appears in the corpus, and $\lvert D \rvert = \sum_{i,j} x_{ij}$ be the total number of co-occurrences.
Then we can re-write (1) as: \begin{align*} \mathbf{pmi}(i;j) &= \log \frac{x_{ij} \lvert D \rvert}{x_{i*} \, x_{*j}} \\ &= \log x_{ij} + \log \lvert D \rvert - \log x_{i*} - \log x_{*j} \end{align*} It is important to note that, when $x_{ij}$ is zero -- i.e., no co-occurrence of $i$ and $j$ has been observed -- PMI is infinitely negative. SGNS can be seen as producing two matrices, $\mathbf{W}$ for focus words and $\mathbf{\tilde{W}}$ for context words, such that their product $\mathbf{W} \mathbf{\tilde{W}}^\top$ approximates the observed PMI between respective word/context pairs. Given a specific focus word $i$ and context word $j$, SGNS minimizes the magnitude of the difference between $w_i^\top \tilde{w}_j$ and $\mathbf{pmi}(i; j)$, tempered by a monotonically increasing weighting function of the observed co-occurrence count, $f(x_{ij})$: \begin{align*} \mathcal{L}_{\mathrm{SGNS}} &= \sum_{i,j} f(x_{ij}) \left( w_i^\top \tilde{w}_j - \mathbf{pmi}(i;j) \right)^2 \\ &= \sum_{i,j} f(x_{ij}) ( w_i^\top \tilde{w}_j - \log x_{ij} - \log \lvert D \rvert \\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \log x_{i*} + \log x_{*j} )^2 \end{align*} Because SGNS slides a sampling window through the entire training corpus, a significant drawback of the algorithm is that it requires training time proportional to the size of the corpus. \textbf{GloVe.} <|cite_start|> (Reference: GloVe: Global Vectors for word representation: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.) <|cite_end|>'s <|cite_start|> (Reference: GloVe: Global Vectors for word representation: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.)
<|cite_end|> GloVe is an approach that instead works from the precomputed corpus co-occurrence statistics. The authors posit several constraints that should lead to preserving the ``linear directions of meaning''. Based on ratios of conditional probabilities of words in context, they suggest that a natural model for learning such linear structure should minimize the following cost function for a given focus word $i$ and context word $j$: \begin{align*} \mathcal{L}_{\mathrm{GloVe}} &= \sum_{i,j} f(x_{ij}) \left( w_i^\top \tilde{w}_j - \log x_{ij} + b_i + b_j \right)^2 \end{align*} Here, $b_i$ and $b_j$ are bias terms that are specific to each focus word and each context word, respectively. Again $f(x_{ij})$ is a function that weights the cost according to the frequency of the co-occurrence count $x_{ij}$. Using stochastic gradient descent, GloVe learns the model parameters for $\mathbf{W}$, $\mathbf{b}$, $\mathbf{\tilde{W}}$, and $\mathbf{\tilde{b}}$: it selects a pair of words observed to co-occur in the corpus, retrieves the corresponding embedding parameters, computes the loss, and back-propagates the error to update the parameters. GloVe therefore requires training time proportional to the number of observed co-occurrence pairs, allowing it to scale independently of corpus size. Although GloVe was developed independently from SGNS (and, as far as we know, without knowledge of <|cite_start|> (Reference: Neural Word Embedding As Implicit Matrix Factorization: We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.) <|cite_end|>'s <|cite_start|> (Reference: Neural Word Embedding As Implicit Matrix Factorization: We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.) <|cite_end|> analysis), it is interesting how similar these two models are. 
\begin{itemize} \item Both seek to minimize the difference between the model's estimate and the log of the co-occurrence count. GloVe has additional free ``bias'' parameters that, in SGNS, are pegged to the corpus frequency of the individual words. Empirically, it can be observed that the bias terms are highly correlated to the frequency of the row and column features in a trained GloVe model. \item Both weight the loss according to the frequency of the co-occurrence count such that frequent co-occurrences incur greater penalty than rare ones.\footnote{This latter similarity is reminiscent of \emph{weighted alternating least squares} <|cite_start|> (Reference: Collaborative filtering for implicit feedback datasets: A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.) <|cite_end|>, which treats $f(x_{ij})$ as a confidence estimate that favors accurate estimation of certain parameters over uncertain ones.} \end{itemize} <|cite_start|> (Reference: Improving distributional similarity with lessons learned from word embeddings: Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.) <|cite_end|> note these algorithmic similarities. In their controlled empirical comparison of several different embedding approaches, results produced by SGNS and GloVe differ only modestly. There are subtle differences, however. The negative sampling regime of SGNS ensures that the model does not place features near to one another in the embedding space whose co-occurrence isn't observed in the corpus. This is distinctly different from GloVe, which trains only on the \emph{observed} co-occurrence statistics. The GloVe model incurs no penalty for placing features near to one another whose co-occurrence has not been observed. 
As we shall see in Section 4, this can result in poor estimates for uncommon features. <|paper_end|>
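To make the bookkeeping above concrete, the sketch below computes the PMI matrix from a co-occurrence count matrix and evaluates an SGNS-style weighted squared loss on the observed cells. It is an illustration of the displayed formulas, not Swivel's training code, and the weighting function f is a placeholder choice.

\begin{verbatim}
import numpy as np

def pmi_matrix(X):
    # pmi(i, j) = log x_ij + log |D| - log x_i* - log x_*j for a
    # co-occurrence count matrix X (rows: focus, columns: context).
    # Unobserved cells (x_ij = 0) come out as -inf, exactly the cells
    # that Swivel's piecewise loss treats specially.
    X = np.asarray(X, dtype=float)
    with np.errstate(divide="ignore"):
        return (np.log(X) + np.log(X.sum())
                - np.log(X.sum(axis=1, keepdims=True))
                - np.log(X.sum(axis=0, keepdims=True)))

def weighted_pmi_loss(W, Wc, X, f=np.sqrt):
    # sum_ij f(x_ij) * (w_i . w~_j - pmi(i, j))^2 over observed pairs,
    # mirroring the SGNS-style objective displayed earlier; f is a
    # placeholder monotone weighting of the counts.
    X = np.asarray(X, dtype=float)
    obs = X > 0
    err = (W @ Wc.T - pmi_matrix(X))[obs]
    return float(np.sum(f(X[obs]) * err ** 2))

X = np.array([[10, 2, 0], [3, 5, 1], [0, 1, 8]])
rng = np.random.default_rng(0)
W, Wc = rng.standard_normal((2, 3, 4))  # random 4-d row/column embeddings
print(weighted_pmi_loss(W, Wc, X))
\end{verbatim}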
[ "<|reference_start|> Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. <|reference_end|>", "<|reference_start|> Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. <|reference_end|>", "<|reference_start|> GloVe: Global Vectors for word representation: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <|reference_end|>", "<|reference_start|> Neural Word Embedding As Implicit Matrix Factorization: We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. 
We conjecture that this stems from the weighted nature of SGNS's factorization. <|reference_end|>" ]
[ 0, 2, 3, 9 ]
{"<|cite_4|>": "arxiv-40388", "<|cite_5|>": "ss-973099", "<|cite_1|>": "arxiv-40388", "<|cite_2|>": "ss-806920", "<|cite_6|>": "arxiv-40388", "<|cite_7|>": "ss-680861", "<|cite_11|>": "ss-806920", "<|cite_9|>": "ss-806920", "<|cite_12|>": "ss-680861", "<|cite_10|>": "ss-680861", "<|cite_3|>": "ss-1273598", "<|cite_8|>": "ss-1092013"}
2408.07632
<|paper_start|> Title: On linear quadratic regulator for the heat equation with general boundary conditions Abstract: On linear quadratic regulator for the heat equation with general boundary conditions: We consider the linear quadratic regulator of the heat equation on a finite interval. Previous frequency-domain methods for this problem rely on the discrete Fourier transform and require symmetric boundary conditions. We use the Fokas method to derive the optimal control law for general Dirichlet and Neumann boundary conditions. The Fokas method uses the continuous Fourier transform restricted to the bounded spatial domain, with the domain of the frequency variable $k$ extended from the real line to the complex plane. This extension, together with results from complex analysis, allows us to eliminate the dependence of the optimal control on the unknown boundary values. As a result, we derive an integral representation of the control similar to the inverse Fourier transform. This representation contains integrals along complex contours and only depends on known initial and boundary conditions. We also show that for the homogeneous Dirichlet boundary value problem, the integral representation recovers an existing series representation of the optimal control. Moreover, the integral representation exhibits numerical advantages compared to the traditional series representation. Introduction Following the seminal paper <|cite_start|> (Reference: {Distributed Control of Spatially Invariant Systems: We consider distributed parameter systems where the underlying dynamics are spatially invariant, and where the controls and measurements are spatially distributed. These systems arise in many applications such as the control of vehicular platoons, flow control, microelectromechanical systems (MEMS), smart structures, and systems described by partial differential equations with constant coefficients and distributed controls and measurements. For fully actuated distributed control problems involving quadratic criteria such as linear quadratic regulator (LQR), $H_2$ and $H_\infty$, optimal controllers can be obtained by solving a parameterized family of standard finite-dimensional problems. We show that optimal controllers have an inherent degree of decentralization, and this provides a practical distributed controller architecture. We also prove a general result that applies to partially distributed control and a variety of performance criteria, stating that optimal controllers inherit the spatial invariance structure of the plant. Connections of this work to that on systems over rings, and systems with dynamical symmetries are discussed.) <|cite_end|>, there has been great interest in optimal control of spatially invariant systems using frequency-domain methods, e.g., see <|cite_start|> (Reference: {Distributed Control Design for Spatially Interconnected Systems: This paper deals with analysis, synthesis, and implementation of distributed controllers, designed for spatially interconnected systems. We develop a state space framework for posing problems of this type, and focus on systems whose model is spatially discrete. In this paper, analysis and synthesis results are developed for this class of systems using the $\ell_2$-induced norm as the performance criterion. The results are stated in terms of linear matrix inequalities and are thus readily amenable to computation.
A special implementation of the resulting controllers is presented, which is particularly attractive for distributed operation of the controller. Several examples are provided to further illustrate the application of the results.) <|cite_end|> <|cite_start|> (Reference: Optimal control of spatially distributed systems: In this paper, we study the structural properties of optimal control of spatially distributed systems. Such systems consist of an infinite collection of possibly heterogeneous linear control systems that are spatially interconnected via certain distant-dependent coupling functions over arbitrary graphs. We study the structural properties of optimal control problems with infinite-horizon linear quadratic criteria, by analyzing the spatial structure of the solution to the corresponding operator Lyapunov and Riccati equations. The key idea of the paper is the introduction of a special class of operators called spatially decaying (SD). These operators are a generalization of translation invariant operators used in the study of spatially invariant systems. We prove that given a control system with a state-space representation consisting of SD operators, the solution of operator Lyapunov and Riccati equations are SD. Furthermore, we show that the kernel of the optimal state feedback for each subsystem decays in the spatial domain, with the type of decay (e.g., exponential, polynomial or logarithmic) depending on the type of coupling between subsystems.) <|cite_end|>. Spatially invariant systems are defined without boundaries, while most real-world systems are defined on finite spatial domains with boundary conditions. Although spatially invariant systems can be seen as approximations of large-scale but finite-extent systems, e.g., in <|cite_start|> (Reference: {On the ill-posedness of certain vehicular platoon control problems: We revisit the vehicular platoon control problems formulated by Levine & Athans (1966) and Melzer & Kuo (1971). We show that in each case, these formulations are effectively ill-posed. Specifically, we demonstrate that in the first formulation, the system's stabilizability degrades as the size of the platoon increases, and that the system loses stabilizability in the limit of an infinite number of vehicles. We show that in the LQR formulation of Melzer & Kuo, the performance index is not detectable, leading to non-stabilizing optimal feedbacks. Effectively, these closed-loop systems do not have a uniform bound on the time constants of all vehicles. For the case of infinite platoons, these difficulties are easily exhibited using the theory of spatially invariant systems. We argue that the infinite case is a useful paradigm to understand large platoons. To this end, we illustrate numerically how stabilizability and detectability degrade as functions of a finite platoon size, implying that the infinite case is a reasonable approximation to the large, but finite case. Finally, we suggest a well-posed alternative formulation of the LQR problem based on penalizing absolute position errors in addition to relative ones in the performance objective.) <|cite_end|>, the effect of boundary conditions is not considered in this approximation. 
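For orientation, the classical finite-dimensional route that frequency-domain methods aim to sidestep is to discretize the PDE and solve an algebraic Riccati equation. The sketch below does this for the 1-D heat equation with homogeneous Dirichlet boundary conditions and fully distributed actuation; the grid size and the cost weights are arbitrary illustrative choices, not values taken from any of the works cited here.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

n = 50                        # interior grid points on (0, 1)
h = 1.0 / (n + 1)
# Second-difference Laplacian with homogeneous Dirichlet conditions.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
B = np.eye(n)                 # distributed control at every grid point
Q = h * np.eye(n)             # grid-weighted quadratic state cost
R = h * np.eye(n)             # grid-weighted quadratic control cost

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
\end{verbatim}

Refining the grid makes such discretizations increasingly expensive and obscures how the optimal gain depends on the boundary conditions, which is one motivation for frequency-domain treatments such as the one pursued here.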
Only a few studies target finite-extent systems with boundary conditions; see <|cite_start|> (Reference: {Distributed control of spatially reversible interconnected systems with boundary conditions: We present a class of spatially interconnected systems with boundary conditions that have close links with their spatially invariant extensions. In particular, well-posedness, stability, and performance of the extension imply the same characteristics for the actual, finite extent system. In turn, existing synthesis methods for control of spatially invariant systems can be extended to this class. The relation between the two kinds of systems is proved using ideas based on the "method of images" of partial differential equations theory and uses symmetry properties of the interconnection as a key tool.) <|cite_end|> <|cite_start|> (Reference: Spatially invariant embeddings of systems with boundaries: We consider certain spatially distributed optimal control problems where the spatial domains are finite intervals with boundaries. The optimal control design procedures for spatially invariant systems are normally not applicable to such bounded spatial domains. For problems that possess certain symmetries, we show how to apply spatially invariant techniques using embeddings. In this note, we report on such “embeddable” problems. As an application, we consider LQR problems for systems posed as PDEs on finite intervals, where it will turn out that the solution for the finite-extent system equals that of its spatially invariant counterpart plus a term that corrects for the boundary conditions. We also show that this decomposition can be understood as a Toeplitz plus Hankel decomposition of the state feedback gain operator, with the Toeplitz part governing the feedback in the interior domain, while the Hankel part provides the needed corrections near the boundaries.) <|cite_end|>. Both studies assumed symmetric boundary conditions and approached the problem by embedding finite-extent systems into equivalent spatially invariant systems. The embedding technique is motivated by the method of images that is used to solve boundary value problems for linear partial differential equations (PDEs) with some symmetry properties. Hence, their methods are also limited to certain symmetric boundary conditions. Although the control of finite systems with general boundary conditions is still quite open, studies have discussed potential extensions to the estimation problem <|cite_start|> (Reference: Optimal estimation in spatially distributed systems: how far to share measurements from?: We consider the centralized optimal estimation problem in spatially distributed systems. We use the setting of spatially invariant systems as an idealization for which concrete and detailed results are given. Such estimators are known to have a degree of spatial localization in the sense that the estimator gains decay in space, with the spatial decay rates serving as a proxy for how far measurements need to be shared in an optimal distributed estimator. In particular, we examine the dependence of spatial decay rates on problem specifications such as system dynamics, measurement and process noise variances, as well as their spatial autocorrelations. We propose non-dimensional parameters that characterize the decay rates as a function of problem specifications.
In particular, we find an interesting matching condition between the characteristic lengthscale of the dynamics and the measurement noise correlation lengthscale for which the optimal centralized estimator is completely decentralized. A new technique - termed the branch point locus - is introduced to quantify spatial decay rates in terms of analyticity regions in the complex spatial frequency plane. Our results are illustrated through two case studies of systems with dynamics modeled by diffusion and the Swift-Hohenberg equation, respectively.) <|cite_end|>. Recently, a unified approach, also known as the Fokas method, has been developed to provide solutions to linear and a class of nonlinear PDEs with general boundary conditions, see <|cite_start|> (Reference: {A unified transform method for solving linear and certain nonlinear PDEs: A new transform method for solving initial boundary value problems for linear and for integrable nonlinear PDEs in two independent variables is introduced. This unified method is based on the fact that linear and integrable nonlinear equations have the distinguished property that they possess a Lax pair formulation. The implementation of this method involves performing a simultaneous spectral analysis of both parts of the Lax pair and solving a Riemann–Hilbert problem. In addition to a unification in the method of solution, there also exists a unification in the representation of the solution. The sine–Gordon equation in light–cone coordinates, the nonlinear Schrödinger equation and their linearized versions are used as illustrative examples. It is also shown that appropriate deformations of the Lax pairs of linear equations can be used to construct Lax pairs for integrable nonlinear equations. As an example, a new Lax pair of the nonlinear Schrödinger equation is derived.) <|cite_end|> <|cite_start|> (Reference: A Unified Approach to Boundary Value Problems: This book presents a new approach to analyzing initial-boundary value problems for integrable partial differential equations (PDEs) in two dimensions, a method that the author first introduced in 1997 and which is based on ideas of the inverse scattering transform. This method is unique in also yielding novel integral representations for the explicit solution of linear boundary value problems, which include such classical problems as the heat equation on a finite interval and the Helmholtz equation in the interior of an equilateral triangle. The author s thorough introduction allows the interested reader to quickly assimilate the essential results of the book, avoiding many computational details. Several new developments are addressed in the book, including a new transform method for linear evolution equations on the half-line and on the finite interval; analytical inversion of certain integrals such as the attenuated radon transform and the Dirichlet-to-Neumann map for a moving boundary; analytical and numerical methods for elliptic PDEs in a convex polygon; and integrable nonlinear PDEs. An epilogue provides a list of problems on which the author s new approach has been used, offers open problems, and gives a glimpse into how the method might be applied to problems in three dimensions. Audience: A Unified Approach to Boundary Value Problems is appropriate for courses in boundary value problems at the advanced undergraduate and first-year graduate levels. Applied mathematicians, engineers, theoretical physicists, mathematical biologists, and other scholars who use PDEs will also find the book valuable. 
Contents: Preface; Introduction; Chapter 1: Evolution Equations on the Half-Line; Chapter 2: Evolution Equations on the Finite Interval; Chapter 3: Asymptotics and a Novel Numerical Technique; Chapter 4: From PDEs to Classical Transforms; Chapter 5: Riemann Hilbert and d-Bar Problems; Chapter 6: The Fourier Transform and Its Variations; Chapter 7: The Inversion of the Attenuated Radon Transform and Medical Imaging; Chapter 8: The Dirichlet to Neumann Map for a Moving Boundary; Chapter 9: Divergence Formulation, the Global Relation, and Lax Pairs; Chapter 10: Rederivation of the Integral Representations on the Half-Line and the Finite Interval; Chapter 11: The Basic Elliptic PDEs in a Polygonal Domain; Chapter 12: The New Transform Method for Elliptic PDEs in Simple Polygonal Domains; Chapter 13: Formulation of Riemann Hilbert Problems; Chapter 14: A Collocation Method in the Fourier Plane; Chapter 15: From Linear to Integrable Nonlinear PDEs; Chapter 16: Nonlinear Integrable PDEs on the Half-Line; Chapter 17: Linearizable Boundary Conditions; Chapter 18: The Generalized Dirichlet to Neumann Map; Chapter 19: Asymptotics of Oscillatory Riemann Hilbert Problems; Epilogue; Bibliography; Index.) <|cite_end|> <|cite_start|> (Reference: {The method of Fokas for solving linear partial differential equations: The classical methods for solving initial-boundary-value problems for linear partial differential equations with constant coefficients rely on separation of variables and specific integral transforms. As such, they are limited to specific equations, with special boundary conditions. Here we review a method introduced by Fokas, which contains the classical methods as special cases. However, this method also allows for the equally explicit solution of problems for which no classical approach exists. In addition, it is possible to elucidate which boundary-value problems are well posed and which are not. We provide examples of problems posed on the positive half-line and on the finite interval. Some of these examples have solutions obtainable using classical methods, and others do not. For the former, it is illustrated how the classical methods may be recovered from the more general approach of Fokas.) <|cite_end|>. Traditionally, linear PDEs with different types of boundary conditions require different specialized methods to obtain solutions. For example, sine transform and series are used for Dirichlet boundary value problems, and cosine transform and series are used for Neumann boundary value problems. In contrast to these specific approaches, the Fokas method uses only the Fourier transform to obtain solutions for all types of boundary conditions, with the frequency variable $k$ extended to the complex domain. Since the Fourier transform was initially used to analyze the optimal control of spatially invariant systems in <|cite_start|> (Reference: {Distributed Control of Spatially Invariant Systems: We consider distributed parameter systems where the underlying dynamics are spatially invariant, and where the controls and measurements are spatially distributed. These systems arise in many applications such as the control of vehicular platoons, flow control, microelectromechanical systems (MEMS), smart structures, and systems described by partial differential equations with constant coefficients and distributed controls and measurements. 
For fully actuated distributed control problems involving quadratic criteria such as linear quadratic regulator (LQR), H/sub 2/ and H/sub /spl infin//, optimal controllers can be obtained by solving a parameterized family of standard finite-dimensional problems. We show that optimal controllers have an inherent degree of decentralization, and this provides a practical distributed controller architecture. We also prove a general result that applies to partially distributed control and a variety of performance criteria, stating that optimal controllers inherit the spatial invariance structure of the plant. Connections of this work to that on systems over rings, and systems with dynamical symmetries are discussed.) <|cite_end|>, it is natural to explore the control of finite-extent systems with boundary conditions using the Fokas method. Unlike the boundary control setting previously dealt with by the Fokas method in <|cite_start|> (Reference: Numerical computation of Neumann controls for the heat equation on a finite interval: This paper presents a new numerical method which approximates Neumann type null controls for the heat equation and is based on the Fokas method. This is a direct method for solving problems originating from the control theory, which allows the realisation of an efficient numerical algorithm that requires small computational effort for determining the null control with exponentially small error. Furthermore, the unified character of the Fokas method makes the extension of the numerical algorithm to a wide range of other linear PDEs and different type of boundary conditions straightforward.) <|cite_end|>, the linear quadratic control appears in PDEs as a forcing term that depends on the states and boundary conditions. Still, we show that the Fokas method can derive an integral representation of the optimal control. Furthermore, the integral control law results in an integral representation of the state that uniformly converges to the boundary conditions, while the series representation does not for nonzero boundary conditions. Also, the numerical computation of integral representations is much easier than computing series representations. It is important to note that the Fokas method applies to general linear PDEs. Thus, our approach can be extended to other linear PDEs, beyond the heat equation. The contributions of the paper are as follows. First, we derive the control law in the complex domain for the linear quadratic regulator of the heat equation using the unified Fourier transform. The optimal control law depends on the Neumann and Dirichlet boundary conditions, whereas only one of these two is given. Second, we derive an integral representation of the optimal control that depends only on the given initial and boundary conditions and thus can be directly evaluated. Third, we show that the integral representation is equivalent to the series representation of the optimal control in <|cite_start|> (Reference: Spatially invariant embeddings of systems with boundaries: We consider certain spatially distributed optimal control problems where the spatial domains are finite intervals with boundaries. The optimal control design procedures for spatially invariant systems are normally not applicable to such bounded spatial domains. For problems that possess certain symmetries, we show how to apply spatially invariant techniques using embeddings. In this note, we report on such “embeddable” problems. 
As an application, we consider LQR problems for systems posed as PDEs on finite intervals, where it will turn out that the solution for the finite-extent system equals that of its spatially invariant counterpart plus a term that corrects for the boundary conditions. We also show that this decomposition can be understood as a Toeplitz plus Hankel decomposition of the state feedback gain operator, with the Toeplitz part governing the feedback in the interior domain, while the Hankel part provides the needed corrections near the boundaries.) <|cite_end|> for the homogeneous Dirichlet boundary value problem. We numerically evaluate our integral representation of the control and demonstrate its numerical advantages over the series representation. The paper is organized as follows: we formulate the linear quadratic regulator problem for the heat equation in \Cref{sec:problem-formulation}. Then, we derive the transformed optimal control law in \Cref{sec:transformed-control}. We consider the special case of infinite-time control in \Cref{sec:infinite-time-control}. \Cref{sec:D-N-map} derives the integral representation of the optimal control. We compare our integral representation of the control with an existing control form in \Cref{sec:comparison}. \Cref{sec:conclusion} summarizes our findings and gives future directions. <|paper_end|>
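For orientation, the problem class described in the abstract above can be sketched as follows (the unit diffusivity and the generic weights $q, r > 0$ are assumptions for illustration; the paper's exact cost functional and horizon are not reproduced in this excerpt):
\[
\partial_t u(x,t) = \partial_x^2 u(x,t) + f(x,t), \qquad x \in (0,\ell),\ t > 0,
\]
with Dirichlet or Neumann data prescribed at $x = 0$ and $x = \ell$, where the regulator seeks the distributed control $f$ minimizing
\[
J(f) = \int_0^{\infty} \!\! \int_0^{\ell} \left( q\, u^2(x,t) + r\, f^2(x,t) \right) dx\, dt.
\]
The Fokas route works with $\hat{u}(k,t) = \int_0^{\ell} e^{-ikx} u(x,t)\, dx$ for complex $k$, and it is the analyticity of $\hat{u}$ in $k$ that permits eliminating the unknown boundary values from the resulting control law.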
[ "<|reference_start|> {Distributed Control Design for Spatially Interconnected Systems: This paper deals with analysis, synthesis, and implementation of distributed controllers, designed for spatially interconnected systems. We develop a state space framework for posing problems of this type, and focus on systems whose model is spatially discrete. In this paper, analysis and synthesis results are developed for this class of systems using the l/sub 2/-induced norm as the performance criterion. The results are stated in terms of linear matrix inequalities and are thus readily amenable to computation. A special implementation of the resulting controllers is presented, which is particularly attractive for distributed operation of the controller. Several examples are provided to further illustrate the application of the results. <|reference_end|>", "<|reference_start|> Optimal control of spatially distributed systems: In this paper, we study the structural properties of optimal control of spatially distributed systems. Such systems consist of an infinite collection of possibly heterogeneous linear control systems that are spatially interconnected via certain distant-dependent coupling functions over arbitrary graphs. We study the structural properties of optimal control problems with infinite-horizon linear quadratic criteria, by analyzing the spatial structure of the solution to the corresponding operator Lyapunov and Riccati equations. The key idea of the paper is the introduction of a special class of operators called spatially decaying (SD). These operators are a generalization of translation invariant operators used in the study of spatially invariant systems. We prove that given a control system with a state-space representation consisting of SD operators, the solution of operator Lyapunov and Riccati equations are SD. Furthermore, we show that the kernel of the optimal state feedback for each subsystem decays in the spatial domain, with the type of decay (e.g., exponential, polynomial or logarithmic) depending on the type of coupling between subsystems. <|reference_end|>", "<|reference_start|> Optimal estimation in spatially distributed systems: how far to share measurements from?: We consider the centralized optimal estimation problem in spatially distributed systems. We use the setting of spatially invariant systems as an idealization for which concrete and detailed results are given. Such estimators are known to have a degree of spatial localization in the sense that the estimator gains decay in space, with the spatial decay rates serving as a proxy for how far measurements need to be shared in an optimal distributed estimator. In particular, we examine the dependence of spatial decay rates on problem specifications such as system dynamics, measurement and process noise variances, as well as their spatial autocorrelations. We propose non-dimensional parameters that characterize the decay rates as a function of problem specifications. In particular, we find an interesting matching condition between the characteristic lengthscale of the dynamics and the measurement noise correlation lengthscale for which the optimal centralized estimator is completely decentralized. A new technique - termed the branch point locus - is introduced to quantify spatial decay rates in terms of analyticity regions in the complex spatial frequency plane. Our results are illustrated through two case studies of systems with dynamics modeled by diffusion and the Swift-Hohenberg equation, respectively. 
<|reference_end|>", "<|reference_start|> {A unified transform method for solving linear and certain nonlinear PDEs: A new transform method for solving initial boundary value problems for linear and for integrable nonlinear PDEs in two independent variables is introduced. This unified method is based on the fact that linear and integrable nonlinear equations have the distinguished property that they possess a Lax pair formulation. The implementation of this method involves performing a simultaneous spectral analysis of both parts of the Lax pair and solving a Riemann–Hilbert problem. In addition to a unification in the method of solution, there also exists a unification in the representation of the solution. The sine–Gordon equation in light–cone coordinates, the nonlinear Schrödinger equation and their linearized versions are used as illustrative examples. It is also shown that appropriate deformations of the Lax pairs of linear equations can be used to construct Lax pairs for integrable nonlinear equations. As an example, a new Lax pair of the nonlinear Schrödinger equation is derived. <|reference_end|>" ]
[ 1, 2, 6, 7 ]
{"<|cite_1|>": "ss-1354406", "<|multi_cite_2_1|>": "ss-1354414", "<|multi_cite_2_2|>": "ss-1287144", "<|cite_3|>": "ss-1067070", "<|multi_cite_4_1|>": "ss-1354408", "<|multi_cite_4_2|>": "ss-2443955", "<|cite_5|>": "arxiv-630610", "<|multi_cite_6_1|>": "ss-2338184", "<|multi_cite_6_2|>": "ss-2032328", "<|multi_cite_6_4|>": "ss-2338185", "<|cite_7|>": "ss-1354406", "<|cite_8|>": "arxiv-371174", "<|cite_9|>": "ss-2443955"}
2003.12060-1
<|cite_start|> (Reference: ArcFace: Additive Angular Margin Loss for Deep Face Recognition: Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains $K$ sub-centers and training samples only need to be close to any of the $K$ positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.) <|cite_end|>. For example, SphereFace <|cite_start|> (Reference: SphereFace: Deep Hypersphere Embedding for Face Recognition: This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter $m$. We further derive specific $m$ to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available.) <|cite_end|>, CosFace <|cite_start|> (Reference: CosFace: Large Margin Cosine Loss for Deep Face Recognition: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. 
More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.) <|cite_end|>, and ArcFace <|cite_start|> (Reference: ArcFace: Additive Angular Margin Loss for Deep Face Recognition: Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains $K$ sub-centers and training samples only need to be close to any of the $K$ positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.) <|cite_end|> enforce intra-class compactness and inter-class diversity by adding a margin to the cosine softmax loss. However, as the tasks of previous works are based on closed-set scenarios, they restrict the margin parameter to positive values <|cite_start|> (Reference: SphereFace: Deep Hypersphere Embedding for Face Recognition: This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter $m$. We further derive specific $m$ to approximate the ideal feature criterion.
Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available.) <|cite_end|> <|cite_start|> (Reference: CosFace: Large Margin Cosine Loss for Deep Face Recognition: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.) <|cite_end|> <|cite_start|> (Reference: ArcFace: Additive Angular Margin Loss for Deep Face Recognition: Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains $K$ sub-centers and training samples only need to be close to any of the $K$ positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.) <|cite_end|>, where the gain in feature discriminability generalizes to the validation set and improves performance.
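Concretely, these large-margin losses share a common form (a sketch; $s$ is the feature scale and $m$ the margin, with CosFace subtracting $m$ from the target cosine and ArcFace adding it inside the target angle as $\cos(\theta_y + m)$):
\[
\mathcal{L} = -\log \frac{e^{s(\cos\theta_y - m)}}{e^{s(\cos\theta_y - m)} + \sum_{j \neq y} e^{s\cos\theta_j}},
\]
where $\theta_j$ is the angle between the $L_2$-normalized feature and the $j$-th class weight and $y$ is the ground-truth class.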
For open-set scenarios, such as few-shot learning, increasing the margin would not enforce inter-class diversity but would instead enlarge the intra-class variance for novel classes, as shown in Fig.~\ref{fig:discriminative_function}, which would hurt the performance. In contrast, an appropriate negative margin would better trade off the discriminability and transferability of deep features for novel classes and obtain better performance for few-shot classification. <|paper_end|>
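To make the signed-margin idea above concrete, here is a minimal NumPy sketch of a CosFace-style loss whose margin is allowed to be negative. It is an illustrative reimplementation under the stated form, not the authors' released code, and names such as cosine_margin_loss are hypothetical.
\begin{verbatim}
import numpy as np

def cosine_margin_loss(features, weights, labels, s=10.0, m=-0.2):
    # L2-normalize features (batch x dim) and class weights (dim x classes)
    # so that the logits are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                      # (batch, classes)
    logits = s * cos
    rows = np.arange(len(labels))
    # Additive margin on the target cosine; m < 0 inflates the target
    # logit instead of penalizing it (the negative-margin setting).
    logits[rows, labels] = s * (cos[rows, labels] - m)
    # Numerically stable cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
\end{verbatim}
With $m > 0$ the target class must clear a stricter threshold (the closed-set setting above); with $m < 0$ the constraint is relaxed, trading some base-class discriminability for the transferability that the fragment argues matters in few-shot classification.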
[ "<|reference_start|> CosFace: Large Margin Cosine Loss for Deep Face Recognition: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach. <|reference_end|>", "<|reference_start|> ArcFace: Additive Angular Margin Loss for Deep Face Recognition: Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains $K$ sub-centers and training samples only need to be close to any of the $K$ positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis. <|reference_end|>", "<|reference_start|> CosFace: Large Margin Cosine Loss for Deep Face Recognition: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. 
However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach. <|reference_end|>", "<|reference_start|> ArcFace: Additive Angular Margin Loss for Deep Face Recognition: Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains $K$ sub-centers and training samples only need to be close to any of the $K$ positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis. <|reference_end|>" ]
[ 2, 3, 5, 6 ]
{"<|multi_cite_1_1|>": "ss-690198", "<|multi_cite_1_2|>": "arxiv-65675", "<|multi_cite_1_3|>": "arxiv-88870", "<|multi_cite_1_4|>": "arxiv-78819", "<|multi_cite_1_5|>": "arxiv-119391", "<|multi_cite_1_6|>": "arxiv-124751", "<|multi_cite_2_1|>": "arxiv-198952", "<|multi_cite_2_2|>": "arxiv-198760", "<|multi_cite_2_3|>": "arxiv-216439", "<|multi_cite_2_4|>": "arxiv-222183", "<|multi_cite_2_5|>": "arxiv-100002", "<|multi_cite_2_6|>": "arxiv-119156", "<|multi_cite_2_7|>": "arxiv-118717", "<|multi_cite_2_8|>": "ss-683782", "<|multi_cite_2_9|>": "arxiv-140298", "<|multi_cite_2_10|>": "arxiv-126527", "<|multi_cite_2_11|>": "arxiv-143548", "<|multi_cite_2_12|>": "arxiv-139784", "<|multi_cite_3_1|>": "arxiv-198952", "<|multi_cite_3_2|>": "arxiv-222183", "<|multi_cite_3_3|>": "arxiv-216439", "<|cite_4|>": "arxiv-198952", "<|cite_5|>": "arxiv-112044", "<|multi_cite_6_1|>": "arxiv-146192", "<|multi_cite_6_2|>": "arxiv-146576", "<|multi_cite_7_1|>": "arxiv-112044", "<|multi_cite_7_2|>": "arxiv-146192", "<|multi_cite_7_3|>": "arxiv-146576", "<|multi_cite_8_1|>": "arxiv-146192", "<|multi_cite_8_2|>": "arxiv-112044", "<|multi_cite_8_3|>": "arxiv-146576", "<|multi_cite_9_1|>": "arxiv-118717", "<|multi_cite_9_2|>": "arxiv-166068", "<|multi_cite_9_3|>": "arxiv-150905", "<|multi_cite_9_4|>": "arxiv-144179", "<|multi_cite_9_5|>": "arxiv-128955", "<|cite_10|>": "ss-683782", "<|cite_11|>": "arxiv-118036", "<|multi_cite_12_1|>": "arxiv-198760", "<|multi_cite_12_2|>": "arxiv-159302", "<|multi_cite_13_1|>": "arxiv-99721", "<|multi_cite_13_2|>": "arxiv-145640", "<|cite_14|>": "arxiv-99721", "<|cite_15|>": "arxiv-145640", "<|cite_16|>": "arxiv-198952", "<|cite_17|>": "arxiv-100002", "<|cite_18|>": "arxiv-119156", "<|cite_19|>": "arxiv-140298", "<|multi_cite_20_1|>": "arxiv-198952", "<|multi_cite_20_2|>": "arxiv-222183", "<|multi_cite_20_3|>": "arxiv-216439", "<|cite_21|>": "ss-1066774", "<|cite_22|>": "ss-1049787", "<|cite_23|>": "arxiv-189824", "<|cite_25|>": "ss-851254", "<|cite_26|>": "ss-790557", "<|cite_27|>": "ss-1066774", "<|cite_28|>": "ss-1071623", "<|multi_cite_29_1|>": "arxiv-189824", "<|multi_cite_29_2|>": "arxiv-198760", "<|multi_cite_29_3|>": "arxiv-178479", "<|multi_cite_30_1|>": "arxiv-74528", "<|multi_cite_30_2|>": "arxiv-122630", "<|multi_cite_30_3|>": "arxiv-146576", "<|multi_cite_30_4|>": "arxiv-146192", "<|cite_31|>": "arxiv-122630", "<|cite_32|>": "arxiv-146576", "<|cite_33|>": "arxiv-146192", "<|multi_cite_34_1|>": "arxiv-122630", "<|multi_cite_34_2|>": "arxiv-146576", "<|multi_cite_34_3|>": "arxiv-146192"}
2006.12940
<|paper_start|> Title: Particle Swarm Optimization for Energy Disaggregation in Industrial and Commercial Buildings Abstract: Particle Swarm Optimization for Energy Disaggregation in Industrial and Commercial Buildings: This paper provides a formalization of the energy disaggregation problem for particle swarm optimization and shows the successful application of particle swarm optimization for disaggregation in a multi-tenant commercial building. The developed mathematical description of the disaggregation problem using a state-changes matrix belongs to the group of non-event-based methods for energy disaggregation. This work includes the development of an objective function in the power domain and the description of position and velocity of each particle in a high-dimensional state space. For the particle swarm optimization, four adaptations have been applied to improve the results of disaggregation, increase the robustness of the optimizer regarding local optima and reduce the computational time. The adaptations are varying movement constants, shaking of particles, framing and an early stopping criterion. In this work, we use two unlabelled power datasets with a granularity of 1 s. Therefore, the results are validated in the power domain, where good performance regarding multiple error measures such as the root mean squared error and the percentage energy error can be shown. Introduction \IEEEPARstart{D}{ue} to the increasing share of renewable energies in electricity generation, the electricity supply is becoming more volatile. In order to guarantee the stability of the power grid, adaptations on both the producer and the consumer side are becoming more important <|cite_start|> (Reference: Demand-side view of electricity markets: This tutorial paper discusses some aspects of electricity markets from the perspective of the demand-side. It argues that increasing the short-run price elasticity of the demand for electrical energy would improve the operation of these markets. It shows, however, that enhancing this elasticity is not an easy task. The tools that consumers and retailers of electrical energy need to participate more actively and effectively in electricity markets are discussed. The paper also describes how consumers of electricity can take part in the provision of power system security.) <|cite_end|>. Adaptations on the consumer side are called demand-side management (DSM) <|cite_start|> (Reference: Demand side management: demand response, intelligent energy systems, and smart loads: Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain.) <|cite_end|> <|cite_start|> (Reference: Demand side management: Benefits and challenges ☆: ) <|cite_end|>. DSM in buildings is carried out by energy management systems.
Energy management can be realized by submetering as in the widely used REDD dataset <|cite_start|> (Reference: REDD: a public data set for energy disaggregation research: Energy and sustainability issues raise a large number of problems that can be tackled using approaches from data mining and machine learning, but traction of such problems has been slow due to the lack of publicly available data. In this paper we present the Reference Energy Disaggregation Data Set (REDD), a freely available data set containing detailed power usage information from several homes, which is aimed at furthering research on energy disaggregation (the task of determining the component appliance contributions from an aggregated electricity signal). We discuss past approaches to disaggregation and how they have influenced our design choices in collecting data, we describe the hardware and software setups for the data collection, and we present initial benchmark disaggregation results using a well-known Factorial Hidden Markov Model (FHMM) technique.) <|cite_end|>. However, this produces large amounts of data and is hardly feasible for a large number of buildings. In order to reduce the needed data, non-intrusive load monitoring (NILM) can be used, which was first described by Hart in <|cite_start|> (Reference: {Nonintrusive Appliance Load Monitoring: A nonintrusive appliance load monitor that determines the energy consumption of individual appliances turning on and off in an electric load, based on detailed analysis of the current and voltage of the total load, as measured at the interface to the power source is described. The theory and current practice of nonintrusive appliance load monitoring are discussed, including goals, applications, load models, appliance signatures, algorithms, prototypes field-test results, current research directions, and the advantages and disadvantages of this approach relative to intrusive monitoring. >) <|cite_end|>. The objective of NILM is to determine the state of every device from an aggregate power signal without complex submetering <|cite_start|> (Reference: Unsupervised disaggregation of appliances using aggregated consumption data: Non-Intrusive Load Monitoring (NILM) is a technique that determines the electrical load composition of a household through a single point of measurement at the main power feed. In contrast with the majority of the existing approaches to solve this problem which require training, here we explore an unsupervised approach to determine the number of appliances in the household, their power consumption and state, at any given moment. We attempt to achieve this without using any a priori information on the number and type of appliances. Our approach is to first create clusters of steady-state changes and then employ a matching pursuit algorithm to reconstruct the original power signals using the clusters that were found as the sources in a linear blind source separation strategy. Changes in steady-state, sometimes referred to as events, are characterized by their change in real and reactive power (P and Q). Ultimately, the results may be applied to other features in an attempt to improve the separation between clusters. The preliminary results point toward a mixed scenario: large appliances (roughly above 400W) were easily identified, but the small appliances typically clustered together and were difficult to separate.
We conclude that the errors occur during clustering which indicates that, in order to increase the purity of the clusters, perhaps other features could be used.) <|cite_end|> <|cite_start|> (Reference: Disaggregation of home energy display data using probabilistic approach: Home energy displays are emerging home energy management devices. However, their energy savings potential is limited, because most display whole-home electricity consumption data. We propose a new approach to disaggregation electricity consumption by individual appliances and/or end uses that would enhance the effectiveness of home energy displays.) <|cite_end|>. Figure~\ref{fig:NILM} shows the principle of NILM, where a measured aggregate power signal is divided into the contributions of the individual loads. Thus, NILM is also called energy disaggregation. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{figs/NILM.png} \caption{Graphical representation of energy disaggregation. The upper illustration shows the total measured power. The bottom illustration shows the corresponding power of four individual loads over time, which sum to the measured aggregate power.} \label{fig:NILM} \end{figure} Numerous publications address this topic with event-based methods <|cite_start|> (Reference: Load Signature Study—Part I: Basic Concept, Structure, and Methodology: Load signature is the unique consumption pattern intrinsic to each individual electrical appliance/piece of equipment. This paper focus on building a universal platform to better understand and explore the nature of electricity consumption patterns using load signatures and advanced technology, such as feature extraction and intelligent computing. Through this knowledge, we can explore and develop innovative applications to achieve better utilization of resources and develop more intelligent ways of operation. This paper depicts the basic concept, features of load signatures, structure and methodology of applying mathematical programming techniques, pattern recognition tools, and committee decision mechanism to perform load disaggregation. New indices are also introduced to aid our understanding of the nature of load signatures and different disaggregation algorithms.) <|cite_end|> and non-event-based methods <|cite_start|> (Reference: A Hybrid Signature-based Iterative Disaggregation algorithm for Non-Intrusive Load Monitoring: ) <|cite_end|> and methods from supervised and unsupervised machine learning <|cite_start|> (Reference: Unsupervised disaggregation for non-intrusive load monitoring: A method for unsupervised disaggregation of appliance signatures from smart meter data is presented. The primary feature used for unsupervised learning relates to abrupt transitions or magnitude changes in the power waveform. The method consists of a sequence of procedures for appliance signature identification, and disaggregation using hidden Markov modeling (HMM), and residual analysis. The key contributions are (a) a novel 'segmented' application of the Viterbi algorithm for sequence decoding with the HMM, (b) details of establishing observation and state transition probabilities for the HMM, and (c) procedures for careful handling of low power signatures. Results show that the method is effective for magnitude-based disaggregation, and provide insights for a more complete solution.)
<|cite_end|> <|cite_start|> (Reference: Unsupervised disaggregation of low frequency power measurements: Fear of increasing prices and concern about climate change are motivating residential power conservation efforts. We investigate the effectiveness of several unsupervised disaggregation methods on low frequency power measurements collected in real homes. Specifically, we consider variants of the factorial hidden Markov model. Our results indicate that a conditional factorial hidden semi-Markov model, which integrates additional features related to when and how appliances are used in the home and more accurately represents the power use of individual appliances, outperforms the other unsupervised disaggregation methods. Our results show that unsupervised techniques can provide perappliance power usage information in a non-invasive manner, which is ideal for enabling power conservation efforts.) <|cite_end|> <|cite_start|> (Reference: An unsupervised training method for non-intrusive appliance load monitoring: ) <|cite_end|> <|cite_start|> (Reference: An Extreme Learning Machine Approach to Effective Energy Disaggregation: Power disaggregation is aimed at determining appliance-by-appliance electricity consumption, leveraging upon a single meter only, which measures the entire power demand. Data-driven procedures based on Factorial Hidden Markov Models (FHMMs) have produced remarkable results on energy disaggregation. Nevertheless, these procedures have various weaknesses; there is a scalability problem as the number of devices to observe rises, and the inference step is computationally heavy. Artificial neural networks (ANNs) have been demonstrated to be a viable solution to deal with FHMM shortcomings. Nonetheless, there are two significant limitations: A complicated and time-consuming training system based on back-propagation has to be employed to estimate the neural architecture parameters, and large amounts of training data covering as many operation conditions as possible need to be collected to attain top performances. In this work, we aim to overcome these limitations by leveraging upon the unique and useful characteristics of the extreme learning machine technique, which is based on a collection of randomly chosen hidden units and analytically defined output weights. We find that the suggested approach outperforms state-of-the-art solutions, namely FHMMs and ANNs, on the UK-DALE corpus. Moreover, our solution generalizes better than previous approaches for unseen houses, and avoids a data-hungry training scheme.) <|cite_end|>. However, in this context almost exclusively households are considered, since the energy systems of individual households are of manageable complexity <|cite_start|> (Reference: Neural NILM: Deep Neural Networks Applied to Energy Disaggregation: Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called `long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. 
We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models and that our neural net algorithms generalise well to an unseen house.) <|cite_end|>. Industrial and commercial buildings are rarely investigated due to their complex energy systems and often confidential electricity data <|cite_start|> (Reference: BLOND, a building-level office environment dataset of typical electrical appliances: ) <|cite_end|>. However, due to the high energy demand of industrial buildings, there is large savings potential and great opportunity for effective demand-side management. Since commercial properties in particular can differ greatly from one another, approaches and algorithms are needed that work independently of these differences and adapt to any dataset. Additionally, there is often no knowledge available other than the measured data. Thus, machine learning algorithms requiring a lot of prior knowledge and many features are not feasible for NILM approaches in industrial buildings. Furthermore, machine learning methods often rely on training that is costly in terms of computational time, computing power, and the amount of data required. In this work, we present a new, fully unsupervised approach using particle swarm optimization (PSO) for energy disaggregation and show its application in a multi-tenant commercial building. The disaggregation problem is of high complexity. PSO is able to find complex solutions even if the information for each particle is limited <|cite_start|> (Reference: Improving Multiobjective Particle Swarm Optimization Method: ) <|cite_end|>. It has been implemented for various applications and complex real-world problems <|cite_start|> (Reference: Particle swarm optimization: developments, applications and resources: This paper focuses on the engineering and computer science aspects of developments, applications, and resources related to particle swarm optimization. Developments in the particle swarm algorithm since its origin in 1995 are reviewed. Included are brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications, both those already developed, and promising future application areas, are reviewed. Finally, resources related to particle swarm optimization are listed, including books, Web sites, and software. A particle swarm optimization bibliography is at the end of the paper.) <|cite_end|> <|cite_start|> (Reference: Particle swarm optimization: Basic concepts, variants and applications in power systems: Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. Also, it provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique.
For each application, technical details that are required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions are also discussed.) <|cite_end|>. The method has been very successful in solving high-dimensional nonlinear optimization problems, including power system applications <|cite_start|> (Reference: EPSO-evolutionary particle swarm optimization, a new algorithm with applications in power systems: This paper presents a new optimization model EPSO, evolutionary particle swarm optimization, inspired in both evolutionary algorithms and in particle swarm optimization algorithms. The fundamentals of the method are described, and an application to the problem of loss minimization and voltage control is presented, with very good results.) <|cite_end|> <|cite_start|> (Reference: A survey of particle swarm optimization applications in power system operations: Particle swarm optimization (PSO) has been getting added attention in many research fields. This article presents a comprehensive coverage of PSO applications in solving optimization problems in the area of electric power systems.) <|cite_end|> <|cite_start|> (Reference: A survey of particle swarm optimization applications in electric power systems: Particle swarm optimization (PSO) has received increased attention in many research fields recently. This paper presents a comprehensive coverage of different PSO applications in solving optimization problems in the area of electric power systems. It highlights the PSO key features and advantages over other various optimization algorithms. Furthermore, recent trends with regard to PSO development in this area are explored. This paper also discusses PSO possible future applications in the area of electric power systems and its potential theoretical studies.) <|cite_end|>. In theory, PSO has been applied effectively to the multidimensional \textit{Knapsack} problem <|cite_start|> (Reference: A Genetic Algorithm for the Multidimensional Knapsack Problem: ) <|cite_end|>, which is very similar to the formulation of the disaggregation problem stated in this work. However, past applications to real-world disaggregation problems have left room for improvement due to a lack of descriptive characteristics of the individual devices. On the other hand, the metaheuristic PSO requires no complex training or model building. The method does not adapt to the data during optimization, which can increase transferability to other datasets with minimal changes. Additionally, problems such as underfitting and overfitting caused by the complexity of a chosen model do not occur. For developing and testing the method, three-phase active and reactive power data from a multi-tenant commercial building and the corresponding device profiles were used. This measured power data is referred to as the aggregate power signal. The device profiles need not be full appliance signatures; they can be, for example, a single component of an appliance signature, such as one operational mode of a complex device. The developed method takes the measured aggregate power signal and device profiles as input to determine the state of each device at any given point in time. In the first part of this paper, we present our formulation of the disaggregation problem and show how PSO can be used and improved for energy disaggregation. To this end, we first introduce the classic PSO method, followed by a formal description of the state space, position, and velocity of the PSO for energy disaggregation.
Thereafter, the adaptations to PSO for energy disaggregation are stated: time-varying movement constants, shaking of particles, framing of the power signal, and an early stopping criterion. To our knowledge, this combination of adaptations to PSO is new. The second part of this work describes the testing of the developed method, including the data and error measures used. The results and their discussion follow, and the paper closes with a conclusion and an outlook. <|paper_end|>
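To make the disaggregation formulation above concrete, here is a minimal, illustrative sketch, not the paper's implementation: device states are assumed binary on/off per time frame, the fitness is the squared reconstruction error between the measured aggregate signal and the sum of active device profiles, and all names and parameter values (swarm size, inertia weight w, movement constants c1 and c2) are assumptions chosen for readability. The paper's stated adaptations (time-varying constants, particle shaking, framing, early stopping) are deliberately omitted.

\begin{verbatim}
import numpy as np

def fitness(state, aggregate, profiles):
    # Squared error between the measured aggregate power and the
    # reconstruction from the devices switched on in `state`.
    reconstruction = profiles.T @ state  # sum of active device profiles
    return np.sum((aggregate - reconstruction) ** 2)

def binary_pso(aggregate, profiles, n_particles=30, iters=200,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    # Toy binary PSO: one bit per device; velocities are squashed
    # through a sigmoid to give per-bit switch-on probabilities.
    rng = np.random.default_rng(seed)
    n_devices = profiles.shape[0]
    pos = rng.integers(0, 2, size=(n_particles, n_devices)).astype(float)
    vel = rng.uniform(-1.0, 1.0, size=(n_particles, n_devices))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, aggregate, profiles) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = (rng.random(pos.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p, aggregate, profiles) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Synthetic usage: three device profiles, one time frame each.
profiles = np.array([[100.0], [60.0], [25.0]])
aggregate = np.array([125.0])  # devices 0 and 2 are on
states, err = binary_pso(aggregate, profiles)
print(states, err)
\end{verbatim}

In this toy run the swarm should recover the state vector [1, 0, 1], since 100 + 25 matches the aggregate of 125; the actual method repeats such a search for each frame of the measured signal.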
[ "<|reference_start|> Disaggregation of home energy display data using probabilistic approach: Home energy displays are emerging home energy management devices. However, their energy savings potential is limited, because most display whole-home electricity consumption data. We propose a new approach to disaggregation electricity consumption by individual appliances and/or end uses that would enhance the effectiveness of home energy displays. <|reference_end|>", "<|reference_start|> An unsupervised training method for non-intrusive appliance load monitoring: <|reference_end|>", "<|reference_start|> Improving Multiobjective Particle Swarm Optimization Method: <|reference_end|>", "<|reference_start|> A Genetic Algorithm for the Multidimensional Knapsack Problem: <|reference_end|>" ]
[ 6, 11, 15, 21 ]
{"<|cite_1|>": "ss-1396176", "<|multi_cite_2_1|>": "ss-1304289", "<|multi_cite_2_2|>": "ss-1362891", "<|cite_3|>": "ss-1830988", "<|cite_4|>": "ss-947937", "<|multi_cite_5_1|>": "ss-2189487", "<|multi_cite_5_2|>": "ss-1680736", "<|cite_6|>": "ss-1627726", "<|cite_7|>": "ss-1907613", "<|multi_cite_8_1|>": "ss-780753", "<|multi_cite_8_2|>": "ss-684707", "<|multi_cite_8_3|>": "ss-2038798", "<|multi_cite_8_4|>": "ss-1796403", "<|cite_9|>": "arxiv-81478", "<|cite_10|>": "ss-1142287", "<|cite_11|>": "ss-1796404", "<|multi_cite_12_1|>": "ss-1379725", "<|multi_cite_12_2|>": "ss-1014065", "<|multi_cite_13_1|>": "ss-1953912", "<|multi_cite_13_2|>": "ss-1796405", "<|multi_cite_13_3|>": "ss-1796406", "<|cite_14|>": "ss-1283717"}
2210.09539
<|paper_start|> Title: Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving Abstract: Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving: We demonstrate the first large-scale application of model-based generative adversarial imitation learning (MGAIL) to the task of dense urban self-driving. We augment standard MGAIL using a hierarchical model to enable generalization to arbitrary goal routes, and measure performance using a closed-loop evaluation framework with simulated interactive agents. We train policies from expert trajectories collected from real vehicles driving over 100,000 miles in San Francisco, and demonstrate a steerable policy that can navigate robustly even in a zero-shot setting, generalizing to synthetic scenarios with novel goals that never occurred in real-world driving. We also demonstrate the importance of mixing closed-loop MGAIL losses with open-loop behavior cloning losses, and show our best policy approaches the performance of the expert. We evaluate our imitative model in both average and challenging scenarios, and show how it can serve as a useful prior to plan successful trajectories. Introduction Driving at scale in dense urban environments remains difficult due to the complexity of interactions between large numbers of diverse actors. In these scenarios, it is difficult to apply classic motion planning methods that require defining cost functions such that the emergent behavior fully aligns with human expectations <|cite_start|> (Reference: Reward (Mis)design for Autonomous Driving: This article considers the problem of diagnosing certain common errors in reward design. Its insights are also applicable to the design of cost functions and performance metrics more generally. To diagnose common errors, we develop 8 simple sanity checks for identifying flaws in reward functions. These sanity checks are applied to reward functions from past work on reinforcement learning (RL) for autonomous driving (AD), revealing near-universal flaws in reward design for AD that might also exist pervasively across reward design for other tasks. Lastly, we explore promising directions that may aid the design of reward functions for AD in subsequent research, following a process of inquiry that can be adapted to other domains.) <|cite_end|>. This motivates imitation learning (IL), which uses expert demonstrations to learn either a cost function or a policy directly over actions <|cite_start|> (Reference: End to End Learning for Self-Driving Cars: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. 
Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).) <|cite_end|> <|cite_start|> (Reference: ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst: Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.) <|cite_end|>. In practice, any imitation model used for motion planning needs additional safety considerations to enforce hard constraints such as collision avoidance and kinematic feasibility. Nonetheless, studying the driving ability of an imitative model in isolation gives an indication of when and where it produces a feasible prior that could be used in an AV stack to plan successful trajectories. In these situations, motion planning could reduce to verifying trajectories from a model instead of generating them with bespoke solutions. A common challenge with IL is covariate shift, also known as the ``DAgger problem" <|cite_start|> (Reference: A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. 
We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.) <|cite_end|>. This occurs when the policy makes small errors that cause it to visit states outside of its training distribution, resulting in compounding error and divergent behavior. Intuitively, this occurs when the policy encounters unfamiliar states, and is similar to challenges in offline reinforcement learning <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) <|cite_end|>. State-of-the-art imitation methods like model-based generative adversarial imitation learning (MGAIL) <|cite_start|> (Reference: {End-to-End Differentiable Adversarial Imitation Learning: Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.) <|cite_end|> address covariate shift through \emph{closed-loop training}, where dynamics are simulated and losses are backed up over the time horizon. Hence, the value of a decision depends on its long-term consequences, in contrast to open-loop behavior cloning, which treats each timestep independently. While theory predicts the importance of closed-loop training <|cite_start|> (Reference: Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap: We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching. At its core, our classification scheme is based on whether the learner attempts to match (1) reward or (2) action-value moments of the expert's behavior, with each option leading to differing algorithmic approaches. 
By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance that apply for all algorithms in each of these classes, the first to our knowledge. We also introduce the notion of moment recoverability, implicit in many previous analyses of imitation learning, which allows us to cleanly delineate how well each algorithmic family is able to mitigate compounding errors. We derive three novel algorithm templates (AdVIL, AdRIL, and DAeQuIL) with strong guarantees, simple implementation, and competitive empirical performance.) <|cite_end|>, empirical evidence in the self-driving literature is limited <|cite_start|> (Reference: Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.) <|cite_end|>. This work affirms the benefit of closed-loop training on a practical, large-scale, and difficult motion planning task. In autonomous driving, a learned motion policy must not only realistically imitate the expert, but also be goal-directed. Such a policy can be challenging to develop due to the confounding of high-level task planning and low-level motion planning <|cite_start|> (Reference: Integrated Task and Motion Planning: The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP). TAMP problems contain elements of discrete task planning, discrete-continuous mathematical programming, and continuous motion planning, and thus cannot be effectively addressed by any of these fields directly. In this paper, we define a class of TAMP problems and survey algorithms for solving them, characterizing the solution methods in terms of their strategies for solving the continuous-space subproblems and their techniques for integrating the discrete and continuous components of the search.) <|cite_end|>: often a trajectory is observed from the expert, but the high-level intents or goals that affect lane choice, future route, or final destination are hidden, making it difficult to recover the causal factors that led to the observed trajectory. 
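Before turning to hierarchical goal conditioning, the open-loop/closed-loop distinction above can be made concrete. The following is a hedged PyTorch-style sketch of the idea, not the authors' implementation: behavior cloning scores each logged timestep independently, while an MGAIL-style closed-loop loss rolls the policy forward through a differentiable dynamics model and backpropagates a discriminator score through the whole horizon. All module shapes, the toy linear dynamics, and the loss weighting are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE, ACTION = 8, 2
policy = nn.Sequential(nn.Linear(STATE, 64), nn.Tanh(), nn.Linear(64, ACTION))
dynamics = nn.Linear(STATE + ACTION, STATE)  # toy differentiable forward model
disc = nn.Sequential(nn.Linear(STATE + ACTION, 64), nn.Tanh(), nn.Linear(64, 1))

def bc_loss(expert_states, expert_actions):
    # Open-loop: each timestep is an independent regression target.
    return F.mse_loss(policy(expert_states), expert_actions)

def mgail_loss(init_state, horizon=10):
    # Closed-loop: simulate the rollout and back the adversarial loss up
    # over the horizon, so early actions are graded on their consequences.
    s, total = init_state, 0.0
    for _ in range(horizon):
        a = policy(s)
        sa = torch.cat([s, a], dim=-1)
        # The policy is pushed to make its state-action pairs look
        # expert-like to the (separately trained) discriminator.
        total = total + F.softplus(-disc(sa)).mean()
        s = dynamics(sa)  # gradient flows through the dynamics model
    return total / horizon

expert_states, expert_actions = torch.randn(32, STATE), torch.randn(32, ACTION)
# Mixing a closed-loop adversarial term with an open-loop BC term, as the
# paper advocates; the 0.5 weight is an arbitrary placeholder.
loss = mgail_loss(torch.randn(32, STATE)) + 0.5 * bc_loss(expert_states, expert_actions)
loss.backward()
\end{verbatim}

The key property is that the gradient of the discriminator term at a late timestep reaches actions taken at earlier timesteps through the dynamics model, which is exactly what per-step behavior cloning cannot provide.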
One promising solution is the use of hierarchical methods, which decompose the problem into a high-level goal generation module and a low-level goal-conditioned motion policy <|cite_start|> (Reference: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, France, May 31 - August 31, 2020: ) <|cite_end|> <|cite_start|> (Reference: Goal-conditioned Imitation Learning: Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might require many samples to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, we show our method can also be used when the available expert trajectories do not contain the actions, which can leverage kinesthetic or third person demonstration. The code is available at https://sites.google.com/view/goalconditioned-il/.) <|cite_end|>. During training, this allows the motion policy to associate the goal with the expert's intent-driven behavior. At inference time, this approach offers the flexibility to specify novel goals and generalize beyond the observed expert trajectories. Ultimately, we aim to develop a policy that can safely navigate a diversity of driving situations and accomplish novel goals, including those not demonstrated by the expert. While we show that employing closed-loop training with respect to the ego vehicle's dynamics is instrumental in creating such a policy, an important question is how to properly evaluate it. Given a dataset of driving scenes with logged vehicle trajectories, evaluating an imitative policy on the same goals achieved by the expert can lead to an overly optimistic performance estimate due to spurious correlations between input features <|cite_start|> (Reference: Causal Confusion in Imitation Learning: Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive "causal misidentification" phenomenon: access to more information can yield worse performance. We investigate how this problem arises, and propose a solution to combat it through targeted interventions---either environment interaction or expert queries---to determine the correct causal model. 
We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and validate our solution against DAgger and other baselines and ablations.) <|cite_end|>. For example, other vehicles' logged trajectories can influence the policy to follow the expert's goal, rather than actively interacting with other actors to reach its own goal. For this reason, it is critical to evaluate the policy's ability to follow novel goals. This poses a challenge when simulating driving because, as the autonomous vehicle (AV) diverges from its logged trajectory to achieve a new goal, other actors' logged trajectories may become unrealistic. To address this issue, we introduce the combination of goal generalization with \emph{closed-loop evaluation}, in which the policy attempts to reach novel goals in the presence of realistic actors that react to the AV's new actions <|cite_start|> (Reference: Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.) <|cite_end|>. Even with the simulation of other interactive actors, it is also important to measure performance in challenging and rare scenarios. Aggregating over a large dataset can mask the performance on difficult but uncommon situations, misrepresenting the model's ability to handle the ``long-tail" <|cite_start|> (Reference: {Autonomy 2.0: Why is self-driving always 5 years away?: Despite the numerous successes of machine learning over the past decade (image recognition, decision-making, NLP, image synthesis), self-driving technology has not yet followed the same trend. In this paper, we study the history, composition, and development bottlenecks of the modern self-driving stack. We argue that the slow progress is caused by approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs. We observe that the classical stack has several bottlenecks that preclude the necessary scale needed to capture the long tail of rare events. To resolve these problems, we outline the principles of Autonomy 2.0, an ML-first approach to self-driving, as a viable alternative to the currently adopted state-of-the-art. 
This approach is based on (i) a fully differentiable AV stack trainable from human demonstrations, (ii) closed-loop data-driven reactive simulation, and (iii) large-scale, low-cost data collections as critical solutions towards scalability issues. We outline the general architecture, survey promising works in this direction and propose key challenges to be addressed by the community in the future.) <|cite_end|>. The key ingredients for closed-loop, machine-learned planner development are, now for the first time, readily available: closed-loop imitation learning with MGAIL, hierarchical goal-based policies, and realistic interactive agents. In this work, we show how to train and evaluate such a system by demonstrating the first application of MGAIL on a large and practical self-driving task of ego vehicle motion planning for dense urban driving. Our method outperforms prior imitation approaches based on pure open-loop optimization like behavior cloning, and achieves aggregate performance similar to the expert demonstrator. We report several key design choices and experimental contributions: \begin{itemize} \item We introduce a hierarchical model that combines a high-level graph-based search with a low-level transformer-based MGAIL policy, adding an intermediate set of route features to help the model generalize and follow arbitrary goal routes. \item We evaluate our policy's ability to follow novel goal routes alongside simulated reactive agents in closed-loop in order to obtain more realistic estimates of zero-shot generalization and allow for interaction between the policy and other actors. \item We run experiments on both average driving and challenging scenarios to estimate ``long-tail" performance and highlight the best opportunities for hill-climbing. \item We run several ablations and show that augmenting MGAIL's closed-loop adversarial losses with an open-loop behavior cloning loss leads to better performance. \end{itemize} Related Work \label{sec:related_work} Imitation learning (IL) has a long history in the robotics and machine learning literature, often appearing under names such as learning from demonstration, apprenticeship learning, inverse reinforcement learning, and inverse optimal control <|cite_start|> (Reference: Algorithms for {{Inverse Reinforcement Learning: Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg/kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg/kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L/kg), clearance (1.54 L/kg/d), and area under the plasma concentration-time curve (AUC; 143 [ng•d]/mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d]/mL), and clearance (1.43 L/kg/d). After ...) 
<|cite_end|> <|cite_start|> (Reference: Apprenticeship {{Learning}} via {{Inverse Reinforcement Learning}}: We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using "inverse reinforcement learning" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.) <|cite_end|> <|cite_start|> (Reference: Learning from Demonstration in the Wild: Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems but typically relies on manually generated demonstrations or specially deployed sensors and has not generally been able to leverage the copious demonstrations available in the wild: those that capture behaviours that were occurring anyway using sensors that were already deployed for another purpose, e.g., traffic camera footage capturing demonstrations of natural behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour (ViBe), a new approach to learn models of behaviour from unlabelled raw video data of a traffic scene collected from a single, monocular, initially uncalibrated camera with ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from videos, without additional expert knowledge.) <|cite_end|> <|cite_start|> (Reference: Maximum {{Entropy Inverse Reinforcement Learning: Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.) <|cite_end|> <|cite_start|> (Reference: Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization: Reinforcement learning can acquire complex behaviors from high-level specifications. 
However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.) <|cite_end|> <|cite_start|> (Reference: Maximum Margin Planning: Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to cost so an optimal policy in an MDP with these cost mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task.) <|cite_end|>. For an overview see <|cite_start|> (Reference: A survey of robot learning from demonstration: ) <|cite_end|>. IL is also closely related to offline reinforcement learning <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) 
<|cite_end|>. Theoretical understanding of IL continues to improve with the seminal work of <|cite_start|> (Reference: A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.) <|cite_end|> and more recently <|cite_start|> (Reference: Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap: We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching. At its core, our classification scheme is based on whether the learner attempts to match (1) reward or (2) action-value moments of the expert's behavior, with each option leading to differing algorithmic approaches. By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance that apply for all algorithms in each of these classes, the first to our knowledge. We also introduce the notion of moment recoverability, implicit in many previous analyses of imitation learning, which allows us to cleanly delineate how well each algorithmic family is able to mitigate compounding errors. We derive three novel algorithm templates (AdVIL, AdRIL, and DAeQuIL) with strong guarantees, simple implementation, and competitive empirical performance.) <|cite_end|> <|cite_start|> (Reference: Shaking the foundations: delusions in sequence models for interaction and control: The recent phenomenal success of language models has reinvigorated machine learning research, and large sequence models such as transformers are being applied to a variety of domains. One important problem class that has remained relatively elusive however is purposeful adaptive behavior. Currently there is a common perception that sequence models "lack the understanding of the cause and effect of their actions" leading them to draw incorrect inferences due to auto-suggestive delusions. In this report we explain where this mismatch originates, and show that it can be resolved by treating actions as causal interventions. Finally, we show that in supervised learning, one can teach a system to condition or intervene on data by training with factual and counterfactual error signals respectively.) <|cite_end|>. Modern approaches to IL use techniques from generative adversarial networks <|cite_start|> (Reference: Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. 
One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.) <|cite_end|> <|cite_start|> (Reference: Learning Robust Rewards with Adversarial Inverse Reinforcement Learning: Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adverserial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.) <|cite_end|> <|cite_start|> (Reference: {End-to-End Differentiable Adversarial Imitation Learning: Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations: The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way. Our method, built on top of Generative Adversarial Imitation Learning, can not only imitate complex behaviors, but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations. 
In the driving domain, we show that a model learned from human demonstrations is able to both accurately reproduce a variety of behaviors and accurately anticipate human actions using raw visual inputs. Compared with various baselines, our method can better capture the latent structure underlying expert demonstrations, often recovering semantically meaningful factors of variation in the data.) <|cite_end|> and include goal-conditioning <|cite_start|> (Reference: Goal-conditioned Imitation Learning: Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might require many samples to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, we show our method can also be used when the available expert trajectories do not contain the actions, which can leverage kinesthetic or third person demonstration. The code is available at https://sites.google.com/view/goalconditioned-il/.) <|cite_end|>. IL has been applied to autonomous driving dating to the early success of ALVINN, and more recently <|cite_start|> (Reference: ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst: Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.) <|cite_end|> <|cite_start|> (Reference: End to End Learning for Self-Driving Cars: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. 
With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).) <|cite_end|> <|cite_start|> (Reference: End-to-end Driving via Conditional Imitation Learning: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM) <|cite_end|> <|cite_start|> (Reference: SafetyNet: Safe planning for real-world self-driving vehicles using machine-learned policies: In this paper we present the first safe system for full control of self-driving vehicles trained from human demonstrations and deployed in challenging, real-world, urban environments. Current industry-standard solutions use rule-based systems for planning. Although they perform reasonably well in common scenarios, the engineering complexity renders this approach incompatible with human-level performance. On the other hand, the performance of machine-learned (ML) planning solutions can be improved by simply adding more exemplar data. However, ML methods cannot offer safety guarantees and sometimes behave unpredictably. To combat this, our approach uses a simple yet effective rule-based fallback layer that performs sanity checks on an ML planner's decisions (e.g. avoiding collision, assuring physical feasibility). 
This allows us to leverage ML to handle complex situations while still assuring the safety, reducing ML planner-only collisions by 95%. We train our ML planner on 300 hours of expert driving demonstrations using imitation learning and deploy it along with the fallback layer in downtown San Francisco, where it takes complete control of a real vehicle and navigates a wide variety of challenging urban driving scenarios.) <|cite_end|> <|cite_start|> (Reference: Learning by Cheating: Vision-based urban driving is hard. The autonomous system needs to learn to perceive the world and act in it. We show that this challenging learning problem can be simplified by decomposing it into two stages. We first train an agent that has access to privileged information. This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants. In the second stage, the privileged agent acts as a teacher that trains a purely vision-based sensorimotor agent. The resulting sensorimotor agent does not have access to any privileged information and does not cheat. This two-stage training procedure is counter-intuitive at first, but has a number of important advantages that we analyze and empirically demonstrate. We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark. Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art. For the video that summarizes this work, see https://youtu.be/u9ZCxxD-UUw) <|cite_end|>. Combining expert demonstrations and reinforcement learning (RL) offers promising new approaches to scalable self-driving <|cite_start|> (Reference: Learning to Drive in a Day: We demonstrate the first application of deep reinforcement learning to autonomous driving. From randomly initialised parameters, our model is able to learn a policy for lane following in a handful of training episodes using a single monocular image as input. We provide a general and easy to obtain reward: the distance travelled by the vehicle without the safety driver taking control. We use a continuous, model-free deep reinforcement learning algorithm, with all exploration and optimisation performed on-vehicle. This demonstrates a new framework for autonomous driving which moves away from reliance on defined logical rules, mapping, and direct supervision. We discuss the challenges and opportunities to scale this approach to a broader range of autonomous driving tasks.) <|cite_end|> <|cite_start|> (Reference: {Autonomy 2.0: Why is self-driving always 5 years away?: Despite the numerous successes of machine learning over the past decade (image recognition, decision-making, NLP, image synthesis), self-driving technology has not yet followed the same trend. In this paper, we study the history, composition, and development bottlenecks of the modern self-driving stack. We argue that the slow progress is caused by approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs. We observe that the classical stack has several bottlenecks that preclude the necessary scale needed to capture the long tail of rare events. 
To resolve these problems, we outline the principles of Autonomy 2.0, an ML-first approach to self-driving, as a viable alternative to the currently adopted state-of-the-art. This approach is based on (i) a fully differentiable AV stack trainable from human demonstrations, (ii) closed-loop data-driven reactive simulation, and (iii) large-scale, low-cost data collections as critical solutions towards scalability issues. We outline the general architecture, survey promising works in this direction and propose key challenges to be addressed by the community in the future.) <|cite_end|>. Despite the excitement of machine learning as a path towards large-scale deployment of AVs, many AV companies still rely heavily on classic search-based planning and trajectory optimization. For a survey of classic approaches, see <|cite_start|> (Reference: A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles: Self-driving vehicles are a maturing technology with the potential to reshape mobility by enhancing the safety, accessibility, efficiency, and convenience of automotive transportation. Safety-critical tasks that must be executed by a self-driving vehicle include planning of motions through a dynamic environment shared with other vehicles and pedestrians, and their robust executions via feedback control. The objective of this paper is to survey the current state of the art on planning and control algorithms with particular regard to the urban setting. A selection of proposed techniques is reviewed along with a discussion of their effectiveness. The surveyed approaches differ in the vehicle mobility model used, in assumptions on the structure of the environment, and in computational requirements. The side-by-side comparison presented in this survey helps to gain insight into the strengths and limitations of the reviewed approaches and assists with system level design choices.) <|cite_end|>. While motion forecasting models <|cite_start|> (Reference: MultiPath++: Efficient Information Fusion and Trajectory Aggregation for Behavior Prediction: Predicting the future behavior of road users is one of the most challenging and important problems in autonomous driving. Applying deep learning to this problem requires fusing heterogeneous world state in the form of rich perception signals and map information, and inferring highly multi-modal distributions over possible futures. In this paper, we present MultiPath++, a future prediction model that achieves state-of-the-art performance on popular benchmarks. MultiPath++ improves the MultiPath architecture by revisiting many design choices. The first key design difference is a departure from dense image-based encoding of the input world state in favor of a sparse encoding of heterogeneous scene elements: MultiPath++ consumes compact and efficient polylines to describe road features, and raw agent state information directly (e.g., position, velocity, acceleration). We propose a context-aware fusion of these elements and develop a reusable multi-context gating fusion component. Second, we reconsider the choice of pre-defined, static anchors, and develop a way to learn latent anchor embeddings end-to-end in the model. Lastly, we explore ensembling and output aggregation techniques -- common in other ML domains -- and find effective variants for our probabilistic multimodal output representation. 
We perform an extensive ablation on these design choices, and show that our proposed model achieves state-of-the-art performance on the Argoverse Motion Forecasting Competition and the Waymo Open Dataset Motion Prediction Challenge.) <|cite_end|> have long been used in AV stacks to predict the behavior of other agents, recent work has applied these models to the ego agent to predict feasible trajectories for direct planning, an approach that can be viewed in the context of open-loop imitation <|cite_start|> (Reference: PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings: For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information. Towards these capabilities, we present a probabilistic forecasting model of future interactions between a variable number of agents. We perform both standard forecasting and the novel task of conditional forecasting, which reasons about how all agents will likely respond to the goal of a controlled agent (here, the AV). We train models on real and simulated data to forecast vehicle trajectories given past positions and LIDAR. Our evaluation shows that our model is substantially more accurate in multi-agent driving scenarios compared to existing state-of-the-art. Beyond its general ability to perform conditional forecasting queries, we show that our model's predictions of all agents improve when conditioned on knowledge of the AV's goal, further illustrating its capability to model agent interactions.) <|cite_end|> <|cite_start|> (Reference: Deep Imitative Models for Flexible Inference, Planning, and Control: Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose Imitative Models to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly specified goals, such as goals on the wrong side of the road.) <|cite_end|> <|cite_start|> (Reference: Deep Structured Reactive Planning: An intelligent agent operating in the real-world must balance achieving its goal with maintaining the safety and comfort of not only itself, but also other participants within the surrounding scene. This requires jointly reasoning about the behavior of other actors while deciding its own actions as these two processes are inherently intertwined - a vehicle will yield to us if we decide to proceed first at the intersection but will proceed first if we decide to yield. However, this is not captured in most self-driving pipelines, where planning follows prediction.
In this paper we propose a novel data-driven, reactive planning objective which allows a self-driving vehicle to jointly reason about its own plans as well as how other actors will react to them. We formulate the problem as an energy-based deep structured model that is learned from observational data and encodes both the planning and prediction problems. Through simulations based on both real-world driving and synthetically generated dense traffic, we demonstrate that our reactive model outperforms a non-reactive variant in successfully completing highly complex maneuvers (lane merges/turns in traffic) faster, without trading off collision rate.) <|cite_end|> <|cite_start|> (Reference: MP3: A Unified Model to Map, Perceive, Predict and Plan: High-definition maps (HD maps) are a key component of most modern self-driving systems due to their valuable semantic and geometric information. Unfortunately, building HD maps has proven hard to scale due to their cost as well as the requirements they impose in the localization system that has to work everywhere with centimeter-level accuracy. Being able to drive without an HD map would be very beneficial to scale self-driving solutions as well as to increase the failure tolerance of existing ones (e.g., if localization fails or the map is not up-to-date). Towards this goal, we propose MP3, an end-to-end approach to mapless driving where the input is raw sensor data and a high-level command (e.g., turn left at the intersection). MP3 predicts intermediate representations in the form of an online map and the current and future state of dynamic agents, and exploits them in a novel neural motion planner to make interpretable decisions taking into account uncertainty. We show that our approach is significantly safer, more comfortable, and can follow commands better than the baselines in challenging long-term closed-loop simulations, as well as when compared to an expert driver in a large-scale real-world dataset.) <|cite_end|> <|cite_start|> (Reference: Large Scale Interactive Motion Forecasting for Autonomous Driving : The Waymo Open Motion Dataset: As autonomous driving systems mature, motion forecasting has received increasing attention as a critical requirement for planning. Of particular importance are interactive situations such as merges, unprotected turns, etc., where predicting individual object motion is not sufficient. Joint predictions of multiple objects are required for effective route planning. There has been a critical need for high-quality motion data that is rich in both interactions and annotation to develop motion planning models. In this work, we introduce the most diverse interactive motion dataset to our knowledge, and provide specific labels for interacting objects suitable for developing joint prediction models. With over 100,000 scenes, each 20 seconds long at 10 Hz, our new dataset contains more than 570 hours of unique data over 1750 km of roadways. It was collected by mining for interesting interactions between vehicles, pedestrians, and cyclists across six cities within the United States. We use a high-accuracy 3D auto-labeling system to generate high quality 3D bounding boxes for each road agent, and provide corresponding high definition 3D maps for each scene. Furthermore, we introduce a new set of metrics that provides a comprehensive evaluation of both single agent and joint agent interaction motion forecasting models. Finally, we provide strong baseline models for individual-agent prediction and joint-prediction. 
We hope that this new large-scale interactive motion dataset will provide new opportunities for advancing motion forecasting models.) <|cite_end|> <|cite_start|> (Reference: End-to-end Interpretable Neural Motion Planner: In this paper, we propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road-users. Towards this goal, we design a holistic model that takes as input raw LIDAR data and a HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. We then sample a set of diverse physically possible trajectories and choose the one with the minimum learned cost. Importantly, our cost volume is able to naturally capture multi-modality. We demonstrate the effectiveness of our approach in real-world driving data captured in several cities in North America. Our experiments show that the learned cost volume can generate safer planning than all the baselines.) <|cite_end|> <|cite_start|> (Reference: Scene Transformer: A unified architecture for predicting multiple agent trajectories: Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g. vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work have focused on predicting independent futures for each agent based on all past motion, and planning against these independent predictions. However, planning against independent predictions can make it challenging to represent the future interaction possibilities between different agents, leading to sub-optimal planning. In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents. Inspired by recent language modeling approaches, we use a masking strategy as the query to our model, enabling one to invoke a single model to predict agent behavior in many ways, such as potentially conditioned on the goal or full future trajectory of the autonomous vehicle or the behavior of other agents in the environment. Our model architecture employs attention to combine features across road elements, agent interactions, and time steps. We evaluate our approach on autonomous driving datasets for both marginal and joint motion prediction, and achieve state of the art performance across two popular datasets. Through combining a scene-centric approach, agent permutation equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks from joint motion predictions to conditioned prediction.) <|cite_end|>. Closed-loop simulation continues to advance, both for the purposes of evaluating driving performance through realistic world models <|cite_start|> (Reference: Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. 
However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.) <|cite_end|> <|cite_start|> (Reference: TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors: Simulation has the potential to massively scale evaluation of self-driving systems enabling rapid development as well as safe deployment. To close the gap between simulation and the real world, we need to simulate realistic multi-agent behaviors. Existing simulation environments rely on heuristic-based models that directly encode traffic rules, which cannot capture irregular maneuvers (e.g., nudging, U-turns) and complex interactions (e.g., yielding, merging). In contrast, we leverage real-world data to learn directly from human demonstration and thus capture a more diverse set of actor behaviors. To this end, we propose TrafficSim, a multi-agent behavior model for realistic traffic simulation. In particular, we leverage an implicit latent variable model to parameterize a joint actor policy that generates socially-consistent plans for all actors in the scene jointly. To learn a robust policy amenable for long horizon simulation, we unroll the policy in training and optimize through the fully differentiable simulation across time. Our learning objective incorporates both human demonstrations as well as common sense. We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines. Notably, we can exploit trajectories generated by TrafficSim as effective data augmentation for training better motion planner.) <|cite_end|> <|cite_start|> (Reference: nuPlan: A closed-loop ML-based planning benchmark for autonomous vehicles: In this work, we propose the world's first closed-loop ML-based planning benchmark for autonomous driving. While there is a growing body of ML-based motion planners, the lack of established datasets and metrics has limited the progress in this area. Existing benchmarks for autonomous vehicle motion prediction have focused on short-term motion forecasting, rather than long-term planning. This has led previous works to use open-loop evaluation with L2-based metrics, which are not suitable for fairly evaluating long-term planning. Our benchmark overcomes these limitations by introducing a large-scale driving dataset, lightweight closed-loop simulator, and motion-planning-specific metrics. We provide a high-quality dataset with 1500h of human driving data from 4 cities across the US and Asia with widely varying traffic patterns (Boston, Pittsburgh, Las Vegas and Singapore). 
We will provide a closed-loop simulation framework with reactive agents and provide a large set of both general and scenario-specific planning metrics. We plan to release the dataset at NeurIPS 2021 and organize benchmark challenges starting in early 2022.) <|cite_end|> <|cite_start|> (Reference: SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving: Multi-agent interaction is a fundamental aspect of autonomous driving in the real world. Despite more than a decade of research and development, the problem of how to competently interact with diverse road users in diverse scenarios remains largely unsolved. Learning methods have much to offer towards solving this problem. But they require a realistic multi-agent simulator that generates diverse and competent driving interactions. To meet this need, we develop a dedicated simulation platform called SMARTS (Scalable Multi-Agent RL Training School). SMARTS supports the training, accumulation, and use of diverse behavior models of road users. These are in turn used to create increasingly more realistic and diverse interactions that enable deeper and broader research on multi-agent interaction. In this paper, we describe the design goals of SMARTS, explain its basic architecture and its key features, and illustrate its use through concrete multi-agent experiments on interactive scenarios. We open-source the SMARTS platform and the associated benchmark tasks and evaluation metrics to encourage and empower research on multi-agent learning for autonomous driving. Our code is available at https://github.com/huawei-noah/SMARTS.) <|cite_end|> <|cite_start|> (Reference: Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation: While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the $\textit{de facto}$ evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by $2$-$20$ times over naive Monte Carlo sampling methods and $10$-$300 \mathsf{P}$ times (where $\mathsf{P}$ is the number of processors) over real-world testing.) <|cite_end|>, and for training driving policies that could transfer to the real world <|cite_start|> (Reference: Virtual to Real Reinforcement Learning for Autonomous Driving: Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure.
Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.) <|cite_end|> <|cite_start|> (Reference: Driving Policy Transfer via Modularity and Abstraction: End-to-end approaches to autonomous driving have high sample complexity and are difficult to scale to realistic urban driving. Simulation can help end-to-end driving systems by providing a cheap, safe, and diverse training environment. Yet training driving policies in simulation brings up the problem of transferring such policies to the real world. We present an approach to transferring driving policies from simulation to reality via modularity and abstraction. Our approach is inspired by classic driving systems and aims to combine the benefits of modular architectures and end-to-end deep learning approaches. The key idea is to encapsulate the driving policy such that it is not directly exposed to raw perceptual input or low-level vehicle dynamics. We evaluate the presented approach in simulated urban environments and in the real world. In particular, we transfer a driving policy trained in simulation to a 1/5-scale robotic truck that is deployed in a variety of conditions, with no finetuning, on two continents. The supplementary video can be viewed at https://youtu.be/BrMDJqI6H5U) <|cite_end|>. The work most similar to ours is <|cite_start|> (Reference: Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.) <|cite_end|>, which applies model-based imitation and parallel beam search to train simulated agents for testing an AV. By contrast, we focus on the ego AV motion planning problem, where following arbitrary goal routes is critical. We avoid beam search as done by <|cite_start|> (Reference: Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. 
Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.) <|cite_end|>, which requires future information from reference trajectories not available in the context of ego agent motion planning <|cite_start|> (Reference: Multimodal Motion Prediction with Stacked Transformers: Predicting multiple plausible future trajectories of the nearby vehicles is crucial for the safety of autonomous driving. Recent motion prediction approaches attempt to achieve such multimodal motion prediction by implicitly regularizing the feature or explicitly generating multiple candidate proposals. However, it remains challenging since the latent features may concentrate on the most frequent mode of the data while the proposal-based methods depend largely on the prior knowledge to generate and select the proposals. In this work, we propose a novel transformer framework for multimodal motion prediction, termed as mmTransformer. A novel network architecture based on stacked transformers is designed to model the multimodality at feature level with a set of fixed independent proposals. A region-based training strategy is then developed to induce the multimodality of the generated proposals. Experiments on Argoverse dataset show that the proposed model achieves the state-of-the-art performance on motion prediction, substantially improving the diversity and the accuracy of the predicted trajectories. Demo video and code are available at https://decisionforce.github.io/mmTransformer.) <|cite_end|> <|cite_start|> (Reference: Perceiver: General Perception with Iterative Attention: Biological systems perceive the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. 
The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture is competitive with or outperforms strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video, and video+audio. The Perceiver obtains performance comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly attending to 50,000 pixels. It is also competitive in all modalities in AudioSet.) <|cite_end|>. We also focus on dense urban driving. <|paper_end|>
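To ground the planning setting discussed in the excerpt above, the following is a minimal, illustrative sketch of goal-conditioned behavioral cloning for ego trajectory planning, i.e., the open-loop imitation setup that the excerpt surveys. All names in the snippet (EgoPlanner, scene_feat, goal_feat) are hypothetical and invented for illustration; this is a didactic sketch under simplifying assumptions, not the implementation of any cited system.

\begin{verbatim}
# Illustrative sketch only: goal-conditioned behavioral cloning for ego
# trajectory planning (open-loop imitation). All names are hypothetical;
# no cited system is reproduced here.
import torch
import torch.nn as nn

class EgoPlanner(nn.Module):
    """Maps a scene feature and a goal-route encoding to future waypoints."""
    def __init__(self, scene_dim=128, goal_dim=16, horizon=20):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(scene_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, horizon * 2),  # (x, y) offset per future step
        )

    def forward(self, scene_feat, goal_feat):
        z = torch.cat([scene_feat, goal_feat], dim=-1)
        return self.net(z).view(-1, self.horizon, 2)

# Toy tensors standing in for encoded perception output and expert waypoints.
batch = 8
scene_feat = torch.randn(batch, 128)
goal_feat = torch.randn(batch, 16)
expert_traj = torch.randn(batch, 20, 2)

planner = EgoPlanner()
opt = torch.optim.Adam(planner.parameters(), lr=1e-3)

# One behavioral-cloning step: L2 regression to expert future waypoints.
pred = planner(scene_feat, goal_feat)
loss = nn.functional.mse_loss(pred, expert_traj)
opt.zero_grad()
loss.backward()
opt.step()
\end{verbatim}

Conditioning the planner on a route/goal encoding is what keeps it steerable at test time, which matches the excerpt's point that following arbitrary goal routes is critical for the ego agent.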
[ "<|reference_start|> Goal-conditioned Imitation Learning: Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might require many samples to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, we show our method can also be used when the available expert trajectories do not contain the actions, which can leverage kinesthetic or third person demonstration. The code is available at https://sites.google.com/view/goalconditioned-il/. <|reference_end|>", "<|reference_start|> Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization: Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency. <|reference_end|>", "<|reference_start|> End-to-end Driving via Conditional Imitation Learning: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. 
We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM <|reference_end|>", "<|reference_start|> Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation: Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines. <|reference_end|>" ]
[ 10, 18, 32, 46 ]
{"<|cite_1|>": "arxiv-337513", "<|multi_cite_2_2|>": "arxiv-96666", "<|multi_cite_2_3|>": "arxiv-183712", "<|cite_3|>": "arxiv-17086", "<|cite_4|>": "arxiv-263399", "<|cite_5|>": "ss-1364533", "<|cite_6|>": "arxiv-325347", "<|cite_7|>": "arxiv-417873", "<|cite_8|>": "arxiv-293494", "<|multi_cite_9_1|>": "ss-1514101", "<|multi_cite_9_2|>": "arxiv-209645", "<|cite_10|>": "arxiv-206519", "<|cite_11|>": "arxiv-417873", "<|cite_12|>": "ss-1220958", "<|multi_cite_13_1|>": "ss-960349", "<|multi_cite_13_2|>": "ss-921172", "<|multi_cite_13_3|>": "arxiv-179644", "<|multi_cite_13_4|>": "ss-953014", "<|multi_cite_13_5|>": "arxiv-93217", "<|multi_cite_13_6|>": "ss-1387936", "<|multi_cite_14_1|>": "ss-1276296", "<|cite_15|>": "arxiv-263399", "<|cite_16|>": "arxiv-17086", "<|multi_cite_17_1|>": "arxiv-325347", "<|multi_cite_17_2|>": "arxiv-375704", "<|multi_cite_18_1|>": "arxiv-99846", "<|multi_cite_18_2|>": "arxiv-138662", "<|multi_cite_18_3|>": "ss-1364533", "<|multi_cite_18_4|>": "arxiv-120045", "<|cite_19|>": "arxiv-209645", "<|multi_cite_21_1|>": "arxiv-183712", "<|multi_cite_21_2|>": "arxiv-96666", "<|multi_cite_21_3|>": "arxiv-136630", "<|multi_cite_21_4|>": "arxiv-369956", "<|multi_cite_21_5|>": "arxiv-241179", "<|multi_cite_22_1|>": "arxiv-164377", "<|multi_cite_22_2|>": "ss-1220958", "<|cite_23|>": "arxiv-96691", "<|cite_24|>": "arxiv-384000", "<|multi_cite_25_1|>": "arxiv-202640", "<|multi_cite_25_2|>": "arxiv-176335", "<|multi_cite_25_3|>": "arxiv-315599", "<|multi_cite_25_4|>": "arxiv-315589", "<|multi_cite_25_5|>": "arxiv-335989", "<|multi_cite_25_6|>": "arxiv-315541", "<|multi_cite_25_7|>": "arxiv-348665", "<|multi_cite_26_1|>": "arxiv-417873", "<|multi_cite_26_2|>": "arxiv-315488", "<|multi_cite_26_3|>": "ss-2353628", "<|multi_cite_26_4|>": "arxiv-297442", "<|multi_cite_26_6|>": "arxiv-178462", "<|multi_cite_27_1|>": "arxiv-121568", "<|multi_cite_27_2|>": "arxiv-156267", "<|cite_28|>": "arxiv-417873", "<|cite_29|>": "arxiv-417873", "<|multi_cite_30_1|>": "arxiv-328898", "<|multi_cite_30_2|>": "arxiv-325332"}
1711.08362
<|paper_start|> Title: RGB-D-based Human Motion Recognition with Deep Learning: A Survey Abstract: RGB-D-based Human Motion Recognition with Deep Learning: A Survey: Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with the development of artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success for image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. Particularly, we highlight the methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research. Introduction Among the several human-centered research activities (e.g. human detection, tracking, pose estimation and motion recognition) in computer vision, human motion recognition is particularly important due to its potential application in video surveillance, human-computer interfaces, ambient assisted living, human-robot interaction, intelligent driving, etc. A human motion recognition task can be summarised as the automatic identification of human behaviours from images or video sequences. The complexity and duration of the motion involved can be used as a basis for broad categorization into four kinds, namely gesture, action, interaction and group activity. A \textit{gesture} can be defined as the basic movement or positioning of the hand, arm, body, or head that communicates an idea, emotion, etc. ``Hand waving" and ``nodding" are some typical examples of gestures. Usually, a gesture has a relatively short duration. An \textit{action} is considered a type of motion performed by a single person over a short time period and involves multiple body parts, in contrast with the few body parts involved in a gesture. An \textit{activity} is composed of a sequence of actions. An \textit{interaction} is a type of motion performed by two actors; one actor is human while the other may be human or an object. This implies that the interaction category will include human-human or human-object interaction. ``Hugging each other" and ``playing guitar" are examples of these two kinds of interaction, respectively. \textit{Group activity} is the most complex type of activity, and it may be a combination of gestures, actions and interactions. Necessarily, it involves more than two humans and zero or more objects. Examples of group activities would include ``two teams playing basketball" and ``group meeting". Early research on human motion recognition was dominated by the analysis of still images or videos <|cite_start|> (Reference: Human motion analysis: A review: Human motion analysis is receiving increasing attention from computer vision researchers.
This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. The paper gives an overview of the various tasks involved in motion analysis of the human body. The authors focus on three major areas related to interpreting human motion: 1) motion analysis involving human body parts, 2) tracking of human motion using single or multiple cameras, and 3) recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints, and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion using a single or multiple camera focuses on higher-level processing, in which moving humans are tracked without identifying specific parts of the body structure. After successfully matching the moving human image from one frame to another in image sequences, understanding the human movements or activities comes naturally, which leads to a discussion of recognizing human activities. The review is illustrated by examples.) <|cite_end|> <|cite_start|> (Reference: Recent developments in human motion analysis: ) <|cite_end|> <|cite_start|> (Reference: {Machine recognition of human activities: A survey: The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) "actions" and 2) "activities." "Actions" are characterized by simple motion patterns typically executed by a single human. "Activities" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.) <|cite_end|> <|cite_start|> (Reference: A survey on vision-based human action recognition: ) <|cite_end|> <|cite_start|> (Reference: A survey on still image based human action recognition: ) <|cite_end|> <|cite_start|> (Reference: From handcrafted to learned representations for human action recognition: A survey: ) <|cite_end|>. 
Most of these efforts used color and texture cues in 2D images for recognition. However, the task remains challenging due to problems posed by background clutter, partial occlusion, viewpoint, lighting changes, execution rate and biometric variation. These challenges persist even with current deep learning approaches <|cite_start|> (Reference: Going Deeper into Action Recognition: A Survey: Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis evolved from earlier schemes that are often limited to controlled environments to nowadays advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved more rapidly, eventually leading to the demise of what used to be good in a short time. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then, navigate into the realm of deep learning based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable fallbacks, in the hope of raising fresh questions and motivating new research directions for the reader.) <|cite_end|> <|cite_start|> (Reference: A Survey on Deep Learning Based Approaches for Action and Gesture Recognition in Image Sequences: The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.) <|cite_end|>. With the recent development of cost-effective RGB-D sensors, such as Microsoft Kinect~\texttrademark and Asus Xtion~\texttrademark, RGB-D-based motion recognition has attracted much attention. This is largely because the extra dimension (depth) is insensitive to illumination changes and includes rich 3D structural information of the scene. Additionally, 3D positions of body joints can be estimated from depth maps <|cite_start|> (Reference: Real-time Human Pose Recognition in Parts from Single Depth Images: We propose a new method to quickly and accurately predict 3D positions of body joints from a single depth image, using no temporal information. We take an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem. Our large and highly varied training dataset allows the classifier to estimate body parts invariant to pose, body shape, clothing, etc. Finally we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs at 200 frames per second on consumer hardware.
Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state of the art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.) <|cite_end|>. As a consequence, several methods based on RGB-D data have been proposed and the approach has proven to be a promising direction for human motion analysis. Several survey papers have summarized the research on human motion recognition using RGB-D data <|cite_start|> (Reference: A survey of human motion analysis using depth imagery: ) <|cite_end|> <|cite_start|> (Reference: A Survey on Human Motion Analysis from Depth Data: ) <|cite_end|> <|cite_start|> (Reference: Human activity recognition from 3D data: A review: ) <|cite_end|> <|cite_start|> (Reference: Survey on 3d hand gesture recognition: Three-dimensional hand gesture recognition has attracted increasing research interests in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.) <|cite_end|> <|cite_start|> (Reference: RGB-D-based Action Recognition Datasets: A Survey: Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it in providing a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets is a useful resource in guiding insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-\'{a}-vis limitations of the available datasets and evaluation protocols are also highlighted; resulting in a number of recommendations for collection of new datasets and use of evaluation protocols.) <|cite_end|> <|cite_start|> (Reference: Challenges in multimodal gesture recognition: ) <|cite_end|> <|cite_start|> (Reference: 3D skeleton-based human action classification: A survey: ) <|cite_end|> <|cite_start|> (Reference: Space-Time Representation of People Based on 3D Skeletal Data: A Review: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. 
Based on the information sources, these representations can be broadly categorized into two groups based on RGB-D information or 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and kept attracting an increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed as well as the realtime, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from the perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, enlist a number of public benchmark datasets with skeleton data, and discuss potential future research directions.) <|cite_end|>. Specifically, Chen et al. <|cite_start|> (Reference: A survey of human motion analysis using depth imagery: ) <|cite_end|> focused on depth sensors, pre-processing of depth data, depth-based action recognition methods and datasets. In their work, Ye et al. <|cite_start|> (Reference: A Survey on Human Motion Analysis from Depth Data: ) <|cite_end|> presented an overview of approaches using depth and skeleton modalities for tasks including activity recognition, head/hand pose estimation, facial feature detection and gesture recognition. The survey presented by Aggarwal and Xia <|cite_start|> (Reference: Human activity recognition from 3D data: A review: ) <|cite_end|> summarized five categories of representations based on 3D silhouettes, skeletal joints/body part location, local spatial-temporal features, scene flow features and local occupancy features. The work of Cheng et al. <|cite_start|> (Reference: Survey on 3d hand gesture recognition: Three-dimensional hand gesture recognition has attracted increasing research interests in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.) <|cite_end|> focused on RGB-D-based hand gesture recognition datasets and summarized corresponding methods from three perspectives: static hand gesture recognition, hand trajectory gesture recognition and continuous hand gesture recognition. In another effort Escalera et al. <|cite_start|> (Reference: Challenges in multimodal gesture recognition: ) <|cite_end|> reviewed the challenges and methods for gesture recognition using multimodal data. Some of the surveys have focused on available datasets for RGB-D research. For example, the work of Zhang et al. 
<|cite_start|> (Reference: RGB-D-based Action Recognition Datasets: A Survey: Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it in providing a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets is a useful resource in guiding insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-\'{a}-vis limitations of the available datasets and evaluation protocols are also highlighted; resulting in a number of recommendations for collection of new datasets and use of evaluation protocols.) <|cite_end|> described available benchmark RGB-D datasets for action/activity recognition and included 27 single-view datasets, 10 multi-view datasets and 7 multi-person datasets. Other works, such as Presti and La Cascia <|cite_start|> (Reference: 3D skeleton-based human action classification: A survey: ) <|cite_end|> and Han et al. <|cite_start|> (Reference: Space-Time Representation of People Based on 3D Skeletal Data: A Review: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Based on the information sources, these representations can be broadly categorized into two groups based on RGB-D information or 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and kept attracting an increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed as well as the realtime, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from the perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, enlist a number of public benchmark datasets with skeleton data, and discuss potential future research directions.) <|cite_end|> mainly reviewed skeleton-based representations and approaches for action recognition. A short survey on RGB-D action recognition using deep learning was recently presented in <|cite_start|> (Reference: A Survey on Deep Learning Based Approaches for Action and Gesture Recognition in Image Sequences: The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions.
We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.) <|cite_end|>, analysing RGB and depth cues in terms of 2DCNN, 3DCNN, and deep temporal approaches. \begin{figure*}[t] \begin{center} {\includegraphics[height = 55mm, width = 170mm]{categorization}} \end{center} \caption{Categorisation of the methods for RGB-D-based motion recognition using deep learning.} \label{categorization} \end{figure*} All of the above surveys mainly focused on the analysis of handcrafted features. Here, we provide a comprehensive review of RGB-D-based human motion recognition using deep learning approaches. Even while focusing on deep learning approaches, the nature of the input data is still important. RGB-D data for human motion analysis comprises three modalities: RGB, depth and skeleton. The main characteristics of RGB data are its shape, color and texture information, which facilitate the extraction of interest points and optical flow. Compared to RGB videos, the depth modality is insensitive to illumination variations, invariant to color and texture changes, reliable for estimating body silhouette and skeleton, and provides rich 3D structural information of the scene. Unlike RGB and depth, skeleton data, which contains the positions of human joints, is a relatively high-level feature for motion recognition. The different properties of the three modalities have inspired the various methods found in the literature. For example, optical flow-based methods with Convolutional Neural Networks (CNNs) are very effective for the RGB channel <|cite_start|> (Reference: Multi-Modality Fusion based on Consensus-Voting and 3D Convolution for Isolated Gesture Recognition: Recently, the popularity of depth-sensors such as Kinect has made depth videos easily available while its advantages have not been fully exploited. This paper investigates, for gesture recognition, to explore the spatial and temporal information complementarily embedded in RGB and depth sequences. We propose a convolutional twostream consensus voting network (2SCVN) which explicitly models both the short-term and long-term structure of the RGB sequences. To alleviate distractions from background, a 3d depth-saliency ConvNet stream (3DDSN) is aggregated in parallel to identify subtle motion characteristics. These two components in an unified framework significantly improve the recognition accuracy. On the challenging Chalearn IsoGD benchmark, our proposed method outperforms the first place on the leader-board by a large margin (10.29%) while also achieving the best result on RGBD-HuDaAct dataset (96.74%). Both quantitative experiments and qualitative analysis shows the effectiveness of our proposed framework and codes will be released to facilitate future research.) <|cite_end|>; depth rank pooling-based methods with CNNs are a good choice for the depth modality <|cite_start|> (Reference: Large-scale Isolated Gesture Recognition Using Convolutional Neural Networks: This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI). These dynamic images are constructed from a sequence of depth maps using bidirectional rank pooling to effectively capture the spatial-temporal information.
Such image-based representations enable us to fine-tune the existing ConvNets models trained on image data for classification of depth sequences, without introducing large parameters to learn. Upon the proposed representations, a convolutional Neural networks (ConvNets) based method is developed for gesture recognition and evaluated on the Large-scale Isolated Gesture Recognition at the ChaLearn Looking at People (LAP) challenge 2016. The method achieved 55.57\% classification accuracy and ranked $2^{nd}$ place in this challenge but was very close to the best performance even though we only used depth data.) <|cite_end|>; sequence-based methods with Recurrent Neural Networks (RNNs) <|cite_start|> (Reference: Global context-aware attention LSTM networks for 3d action recognition: Long Short-Term Memory (LSTM) networks have shown superior performance in 3D human action recognition due to their power in modeling the dynamics and dependencies in sequential data. Since not all joints are informative for action analysis and the irrelevant joints often bring a lot of noise, we need to pay more attention to the informative ones. However, original LSTM does not have strong attention capability. Hence we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for 3D action recognition, which is able to selectively focus on the informative joints in the action sequence with the assistance of global contextual information. In order to achieve a reliable attention representation for the action sequence, we further propose a recurrent attention mechanism for our GCA-LSTM network, in which the attention performance is improved iteratively. Experiments show that our end-to-end network can reliably focus on the most informative joints in each frame of the skeleton sequence. Moreover, our network yields state-of-the-art performance on three challenging datasets for 3D action recognition.) <|cite_end|> and image-based methods with CNNs <|cite_start|> (Reference: Action Recognition Based on Joint Trajectory Maps Using Convolutional Neural Networks: Recently, Convolutional Neural Networks (ConvNets) have shown promising performances in many computer vision tasks, especially image-based recognition. How to effectively use ConvNets for video-based recognition is still an open problem. In this paper, we propose a compact, effective yet simple method to encode spatio-temporal information carried in $3D$ skeleton sequences into multiple $2D$ images, referred to as Joint Trajectory Maps (JTM), and ConvNets are adopted to exploit the discriminative features for real-time human action recognition. The proposed method has been evaluated on three public benchmarks, i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal human action dataset (UTD-MHAD) and achieved the state-of-the-art results.) <|cite_end|> are effective for the skeleton modality; and scene flow-based methods using CNNs are promising for RGB+D channels <|cite_start|> (Reference: Scene Flow to Action Map: A New Representation for RGB-D based Action Recognition with Convolutional Neural Networks: Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data.
Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.) <|cite_end|>. These methods are very effective for specific modalities, but this is not always the case across all modalities. Given these observations, this survey identifies four broad categories of methods based on the modality adopted for human motion recognition. The categories include RGB-based, depth-based, skeleton-based and RGB+D-based. In each category, two sub-divisions are further identified, namely segmented human motion recognition and continuous/online motion recognition. For segmented motion recognition, the scenario of the problem can be simply described as classifying a well-delineated sequence of video frames as one of a set of motion types. This is in contrast to continuous/online human motion recognition, where no boundaries of motion execution are given a priori. The online situation is compounded by the fact that the video sequence is not recorded and the algorithm must deal with frames as they are being captured, save for possibly a small data cache. During the performance of a specified motion, spatial information, which refers to the spatial configuration of the human body at an instant of time (e.g., the relative positions of the human body parts), can be identified. Similarly, there is the temporal information, which characterizes the spatial configuration of the body over time (i.e., the dynamics of the body). Lastly, the structural information encodes the coordination and synchronization of body parts over the period in which the action is being performed. It describes the relationship of the spatial configurations of the human body across different time slots. In reviewing the various methods, consideration has been given to the manner in which the spatial, temporal and structural information have been exploited. Hence, the survey discusses the advantages and limitations of the reviewed methods from the spatial-temporal-structural encoding viewpoint, and suggests potential directions for future research. A key novelty of this survey is the focus on three architectures of neural networks used in the various deep learning methods reviewed, namely CNN-based, RNN-based and other structured networks. Fig.~\ref{categorization} illustrates the taxonomy underpinning this survey. This is one of the first surveys dedicated to RGB-D-based human motion recognition using deep learning.
Apart from this claim, this survey distinguishes itself from other surveys through the following contributions: \begin{itemize} \item Comprehensive coverage of the most recent and advanced deep learning-based methods developed in the last five years, thereby providing readers with a complete overview of recent research results and state-of-the-art methods. \item Insightful categorization and analysis of methods based on the different properties of the modalities; highlight of the pros and cons of the methods described in the reviewed papers from the viewpoint of spatial-temporal-structural encoding. \item Discussion of the challenges of RGB-D-based motion recognition; analysis of the limitations of available methods and discussion of potential research directions. \end{itemize} Additionally, several recently released or commonly used RGB-D-based benchmark datasets associated with deep learning are surveyed. The main application domain of interest in this survey paper is human motion recognition based on RGB-D data, including gesture recognition, action/activity recognition and interaction recognition. The lack of datasets focused on RGB-D-based group activity recognition has led to a paucity of research on this topic, and thus this survey does not cover it. Other RGB-D-based human-centered applications, such as human detection, tracking and pose estimation, are also not the focus of this paper. For surveys on RGB-D data acquisition, readers are referred to <|cite_start|> (Reference: A survey of human motion analysis using depth imagery: ) <|cite_end|> <|cite_start|> (Reference: Survey on 3d hand gesture recognition: Three-dimensional hand gesture recognition has attracted increasing research interests in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.) <|cite_end|> <|cite_start|> (Reference: Space-Time Representation of People Based on 3D Skeletal Data: A Review: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Based on the information sources, these representations can be broadly categorized into two groups based on RGB-D information or 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and kept attracting an increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed as well as the realtime, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from the perspectives, including information modality, representation encoding, structure and transition, and feature engineering.
We also provide a brief overview of skeleton acquisition devices and construction methods, enlist a number of public benchmark datasets with skeleton data, and discuss potential future research directions.) <|cite_end|>. Subsequent sections of this survey are organized as follows. Commonly used RGB-D-based benchmark datasets are described in Section~\ref{dataset}. Sections~\ref{rgb} to~\ref{multi} discuss methods of RGB-D-based motion recognition using deep learning from four perspectives: RGB-based motion recognition, depth-based motion recognition, skeleton-based motion recognition and RGB+D-based motion recognition. Challenges of RGB-D-based motion recognition and pointers to future directions are presented in Section~\ref{discuss}. The survey provides concluding remarks in Section~\ref{conclusion}. <|paper_end|>
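As an aside to the survey above: the depth rank pooling representation it highlights (collapsing a depth sequence into a single dynamic image that an ordinary 2D CNN can classify) can be sketched in a few lines. The snippet below is our minimal illustration, not the cited authors' code; it uses the approximate rank pooling weights popularized by dynamic-image methods (an assumption, since the surveyed work uses bidirectional rank pooling variants), and the clip shape and normalization choices are ours.
\begin{verbatim}
import numpy as np

def approximate_rank_pooling(frames: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W) depth clip into one 'dynamic image'.

    Weights: alpha_t = 2*(T-t+1) - (T+1)*(H_T - H_{t-1}), with
    H_t = sum_{i<=t} 1/i, so later frames weigh more and the single
    image encodes the temporal evolution of the sequence.
    """
    T = frames.shape[0]
    # harmonics[t] = H_t, with harmonics[0] = 0
    harmonics = np.concatenate([[0.0], np.cumsum(1.0 / np.arange(1, T + 1))])
    t = np.arange(1, T + 1)
    alpha = 2.0 * (T - t + 1) - (T + 1) * (harmonics[T] - harmonics[t - 1])
    # Weighted temporal sum over the frame axis -> (H, W)
    dynamic = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so it can be treated as an ordinary image
    dynamic -= dynamic.min()
    dynamic /= max(dynamic.max(), 1e-8)
    return (255.0 * dynamic).astype(np.uint8)

# Example with a synthetic 16-frame, 64x64 depth clip (illustrative only)
clip = np.random.rand(16, 64, 64)
image = approximate_rank_pooling(clip)  # (64, 64) uint8
\end{verbatim}
The resulting image can then be fed to a CNN pretrained on image data (after channel replication and resizing), which is the fine-tuning setup the surveyed dynamic-image methods rely on.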
[ "<|reference_start|> Space-Time Representation of People Based on 3D Skeletal Data: A Review: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Based on the information sources, these representations can be broadly categorized into two groups based on RGB-D information or 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and kept attracting an increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed as well as the realtime, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from the perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, enlist a number of public benchmark datasets with skeleton data, and discuss potential future research directions. <|reference_end|>", "<|reference_start|> 3D skeleton-based human action classification: A survey: <|reference_end|>", "<|reference_start|> Space-Time Representation of People Based on 3D Skeletal Data: A Review: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Based on the information sources, these representations can be broadly categorized into two groups based on RGB-D information or 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and kept attracting an increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed as well as the realtime, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from the perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, enlist a number of public benchmark datasets with skeleton data, and discuss potential future research directions. <|reference_end|>", "<|reference_start|> A survey of human motion analysis using depth imagery: <|reference_end|>" ]
[ 16, 23, 24, 31 ]
{"<|multi_cite_1_1|>": "ss-2157676", "<|multi_cite_1_2|>": "ss-2431023", "<|multi_cite_1_3|>": "ss-1093078", "<|multi_cite_1_4|>": "ss-1387363", "<|multi_cite_1_5|>": "ss-1178336", "<|multi_cite_1_6|>": "ss-1178341", "<|multi_cite_2_1|>": "arxiv-98053", "<|multi_cite_2_2|>": "ss-1270085", "<|cite_3|>": "ss-1114755", "<|multi_cite_4_1|>": "ss-1427023", "<|multi_cite_4_2|>": "ss-1499250", "<|multi_cite_4_3|>": "ss-1178340", "<|multi_cite_4_4|>": "ss-1450217", "<|multi_cite_4_5|>": "arxiv-90711", "<|multi_cite_4_6|>": "ss-2292688", "<|multi_cite_4_7|>": "ss-1064213", "<|multi_cite_4_8|>": "arxiv-89980", "<|cite_5|>": "ss-1427023", "<|cite_6|>": "ss-1499250", "<|cite_7|>": "ss-1178340", "<|cite_8|>": "ss-1450217", "<|cite_9|>": "ss-2292688", "<|cite_10|>": "arxiv-90711", "<|cite_11|>": "ss-1064213", "<|cite_12|>": "arxiv-89980", "<|cite_13|>": "ss-1270085", "<|cite_14|>": "arxiv-110604", "<|cite_15|>": "arxiv-113901", "<|cite_16|>": "ss-789491", "<|cite_17|>": "arxiv-109636", "<|cite_18|>": "arxiv-117742", "<|multi_cite_19_1|>": "ss-1427023", "<|multi_cite_19_2|>": "ss-1450217", "<|multi_cite_19_3|>": "arxiv-89980"}
1111.2948
<|paper_start|> Title: Using Contextual Information as Virtual Items on Top-N Recommender Systems Abstract: Using Contextual Information as Virtual Items on Top-N Recommender Systems: Traditionally, recommender systems for the Web deal with applications that have two dimensions, users and items. Based on access logs that relate these dimensions, a recommendation model can be built and used to identify a set of N items that will be of interest to a certain user. In this paper we propose a method to complement the information in the access logs with contextual information without changing the recommendation algorithm. The method consists in representing context as virtual items. We empirically test this method with two top-N recommender systems, an item-based collaborative filtering technique and association rules, on three data sets. The results show that our method is able to take advantage of the context (new dimensions) when it is informative. Introduction \label{sec:int} Most Web sites offer a large number of information resources to their users. Finding relevant content has, thus, become a challenge for users. Recommender systems have emerged in response to this problem. A recommender system for a Web site receives (implicit or explicit) information about users and their behavior and recommends items that are likely to fit their needs <|cite_start|> (Reference: Analysis of recommendation algorithms for E-commerce: ABSTRACT Recommender systems apply statistical and knowledge discovery techniques to the problem of making product recommendations during a live customer interaction and they are achieving widespread success in E-Commerce nowadays. In this paper, we investigate several techniques for analyzing large-scale purchase and preference data for the purpose of producing useful recommendations to customers. In particular, we apply a collection of algorithms such as traditional data mining, nearest-neighbor collaborative filtering, and dimensionality reduction on two different data sets. The first data set was derived from the web-purchasing transaction of a large E-commerce company whereas the second data set was collected from MovieLens movie recommendation site. For the experimental purpose, we divide the recommendation generation process into three sub-processes: representation of input data, neighborhood formation, and recommendation generation. We devise different techniques for different sub-processes and apply their combinations on our data sets to compare for recommendation quality and performance.) <|cite_end|>. Recommender models for Web personalization can be built from the historical record of accesses to a site, where one access is a pair $<user\_id,item>$. Each access is interpreted as a rating of $1$ given by the user to the item. However, other dimensions, such as time and location, can add contextual information and improve the accuracy of recommendations. For instance, the type of books that a user looks for in Amazon during work hours is probably different from the books searched for during leisure hours. According to <|cite_start|> (Reference: Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before.
In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is "cleverly" modeled, as described in this paper. These findings have significant implications for data miners and marketers. They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information.) <|cite_end|>, the idea that contextual information is important when predicting customer behavior is not new. Many Web sites are supported by Content Management Systems (CMS), which often store much contextual information. However, this is not true in all cases and, additionally, getting information that is really relevant for recommendation is a hard task in many applications <|cite_start|> (Reference: Personalization in context: Does context matter when building personalized customer models?: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications have been done before. In this paper, we address this problem. To this aim, we collected data containing rich contextual information by developing a special-purpose browser to help users to navigate a well-known e-commerce retail portal and purchase products on its site. The experimental results show that context does matter for the case of modeling behavior of individual customers. The granularity of contextual information also matters, and the effect of contextual information gets diluted during the process of aggregating customers' data.) <|cite_end|>. Adomavicius et al. <|cite_start|> (Reference: Incorporating contextual information in recommender systems using a multidimensional approach: The article presents a multidimensional (MD) approach to recommender systems that can provide recommendations based on additional contextual information besides the typical information on users and items used in most of the current recommender systems. This approach supports multiple dimensions, profiling information, and hierarchical aggregation of recommendations. The article also presents a multidimensional rating estimation method capable of selecting two-dimensional segments of ratings pertinent to the recommendation context and applying standard collaborative filtering or other traditional two-dimensional rating estimation techniques to these segments. A comparison of the multidimensional and two-dimensional rating estimation approaches is made, and the tradeoffs between the two are studied. Moreover, the article introduces a combined rating estimation method, which identifies the situations where the MD approach outperforms the standard two-dimensional approach and uses the MD approach in those situations and the standard two-dimensional approach elsewhere.
Finally, the article presents a pilot empirical study of the combined approach, using a multidimensional movie recommender system that was developed for implementing this approach and testing its performance.) <|cite_end|> have investigated the use of context for rating estimation in multidimensional recommender systems. Palmisano et al. <|cite_start|> (Reference: Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before. In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is "cleverly" modeled, as described in this paper. These findings have significant implications for data miners and marketers. They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information.) <|cite_end|> have used contextual information to improve the predictive modeling of customers' behavior. Both authors have developed a special-purpose browser to obtain rich contextual information. In this paper we investigate how contextual information can be used to improve the accuracy of Top-$N$ Recommender Systems. Existing contextual recommender systems typically use contextual information as a label for segmenting/filtering sessions, using these segments to build the recommendation model (e.g., <|cite_start|> (Reference: Incorporating contextual information in recommender systems using a multidimensional approach: The article presents a multidimensional (MD) approach to recommender systems that can provide recommendations based on additional contextual information besides the typical information on users and items used in most of the current recommender systems. This approach supports multiple dimensions, profiling information, and hierarchical aggregation of recommendations. The article also presents a multidimensional rating estimation method capable of selecting two-dimensional segments of ratings pertinent to the recommendation context and applying standard collaborative filtering or other traditional two-dimensional rating estimation techniques to these segments. A comparison of the multidimensional and two-dimensional rating estimation approaches is made, and the tradeoffs between the two are studied. Moreover, the article introduces a combined rating estimation method, which identifies the situations where the MD approach outperforms the standard two-dimensional approach and uses the MD approach in those situations and the standard two-dimensional approach elsewhere.
Finally, the article presents a pilot empirical study of the combined approach, using a multidimensional movie recommender system that was developed for implementing this approach and testing its performance.) <|cite_end|> <|cite_start|> (Reference: Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before. In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is "cleverly" modeled, as described in this paper. These findings have significant implications for data miners and marketers. They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information.) <|cite_end|>). We follow an alternative approach, which uses the contextual attribute as a virtual item. This means that it is treated as an ordinary item for building the recommendation model, which has the advantage of allowing the use of existing recommendation algorithms. As our contextual information is obtained from multidimensional data, we have called our approach \textbf{DaVI} (\emph{Dimensions as Virtual Items}). Instead of a special-purpose browser <|cite_start|> (Reference: Incorporating contextual information in recommender systems using a multidimensional approach: The article presents a multidimensional (MD) approach to recommender systems that can provide recommendations based on additional contextual information besides the typical information on users and items used in most of the current recommender systems. This approach supports multiple dimensions, profiling information, and hierarchical aggregation of recommendations. The article also presents a multidimensional rating estimation method capable of selecting two-dimensional segments of ratings pertinent to the recommendation context and applying standard collaborative filtering or other traditional two-dimensional rating estimation techniques to these segments. A comparison of the multidimensional and two-dimensional rating estimation approaches is made, and the tradeoffs between the two are studied. Moreover, the article introduces a combined rating estimation method, which identifies the situations where the MD approach outperforms the standard two-dimensional approach and uses the MD approach in those situations and the standard two-dimensional approach elsewhere. Finally, the article presents a pilot empirical study of the combined approach, using a multidimensional movie recommender system that was developed for implementing this approach and testing its performance.)
<|cite_end|> <|cite_start|> (Reference: Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before. In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is "cleverly" modeled, as described in this paper. These findings have significant implications for data miners and marketers. They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information.) <|cite_end|>, we collect the multidimensional data from Web access logs and from attributes stored in databases of the Web sites. We have empirically tested our approach with two recommendation techniques, item-based collaborative filtering and association rules, to assess the effect of adding context on the accuracy of traditional Web recommender systems. We present results obtained on three data sets. In the following section, we present the contextual information used in our experiments. Next, we describe the recommendation techniques and the approach proposed. Then, we discuss results and present conclusions and future work. <|paper_end|>
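To make the DaVI idea described in the paper above concrete, the following sketch shows one plausible preprocessing step (our illustration, not the authors' implementation): each contextual dimension value is appended to the session as a virtual item, so that an unmodified item-based collaborative filter or association-rule miner can consume it. The session layout, the context names, and the handling of virtual items at recommendation time are our assumptions.
\begin{verbatim}
def add_virtual_items(sessions, context):
    """DaVI-style preprocessing: append each contextual value as a
    virtual item so an unmodified recommender can exploit it.

    sessions: {session_id: [item, ...]} parsed from Web access logs
    context:  {session_id: {dimension: value}} e.g. {'daypart': 'work'}
    """
    enriched = {}
    for sid, items in sessions.items():
        virtual = [f"{dim}={val}"
                   for dim, val in context.get(sid, {}).items()]
        # Virtual items look exactly like ordinary items downstream
        enriched[sid] = list(items) + virtual
    return enriched

sessions = {"s1": ["item_a", "item_b"], "s2": ["item_b", "item_c"]}
context  = {"s1": {"daypart": "work"},  "s2": {"daypart": "leisure"}}

data = add_virtual_items(sessions, context)
# {'s1': ['item_a', 'item_b', 'daypart=work'], ...}
\end{verbatim}
The enriched sessions can then be fed, unchanged, to item-based collaborative filtering or association-rule mining; presumably, at recommendation time the virtual items for the current context are placed in the active session, and any virtual items surfacing among the top-N results are filtered out before presentation (this last step is our assumption, not spelled out in the excerpt).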
[ "<|reference_start|> Personalization in context: Does context matter when building personalized customer models?: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications have been done before. In this paper, we address this problem. To this aim, we collected data containing rich contextual information by developing a special-purpose browser to help users to navigate a well- known e-commerce retail portal and purchase products on its site. The experimental results show that context does matter for the case of modeling behavior of individual customers. The granularity of contextual information also matters, and the effect of contextual information gets diluted during the process of aggregating customers' data. <|reference_end|>", "<|reference_start|> Incorporating contextual information in recommender systems using a multidimensional approach: The article presents a multidimensional (MD) approach to recommender systems that can provide recommendations based on additional contextual information besides the typical information on users and items used in most of the current recommender systems. This approach supports multiple dimensions, profiling information, and hierarchical aggregation of recommendations. The article also presents a multidimensional rating estimation method capable of selecting two-dimensional segments of ratings pertinent to the recommendation context and applying standard collaborative filtering or other traditional two-dimensional rating estimation techniques to these segments. A comparison of the multidimensional and two-dimensional rating estimation approaches is made, and the tradeoffs between the two are studied. Moreover, the article introduces a combined rating estimation method, which identifies the situations where the MD approach outperforms the standard two-dimensional approach and uses the MD approach in those situations and the standard two-dimensional approach elsewhere. Finally, the article presents a pilot empirical study of the combined approach, using a multidimensional movie recommender system that was developed for implementing this approach and testing its performance. <|reference_end|>", "<|reference_start|> Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before. In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is \"cleverly\" modeled, as described in this paper. These findings have significant implications for data miners and marketers. 
They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information. <|reference_end|>", "<|reference_start|> Using context to improve predictive modeling of customers in personalization applications: The idea that context is important when predicting customer behavior has been maintained by scholars in marketing and data mining. However, no systematic study measuring how much the contextual information really matters in building customer models in personalization applications has been done before. In this paper, we study how important the contextual information is when predicting customer behavior and how to use it when building customer models. It is done by conducting an empirical study across a wide range of experimental conditions. The experimental results show that context does matter when modeling the behavior of individual customers and that it is possible to infer the context from the existing data with reasonable accuracy in certain cases. It is also shown that significant performance improvements can be achieved if the context is \"cleverly\" modeled, as described in this paper. These findings have significant implications for data miners and marketers. They show that contextual information does matter in personalization and companies have different opportunities to both make context valuable for improving predictive performance of customers' behavior and decreasing the costs of gathering contextual information. <|reference_end|>" ]
[ 2, 3, 6, 8 ]
{"<|cite_1|>": "ss-1043973", "<|cite_2|>": "ss-1208397", "<|cite_3|>": "ss-1491600", "<|cite_4|>": "ss-1253545", "<|cite_5|>": "ss-1208397", "<|multi_cite_6_1|>": "ss-1253545", "<|multi_cite_6_2|>": "ss-1208397", "<|multi_cite_7_1|>": "ss-1253545", "<|multi_cite_7_2|>": "ss-1208397"}
2003.13481
<|paper_start|> Title: Concept-aware Geographic Information Retrieval Abstract: Concept-aware Geographic Information Retrieval: Textual queries are largely employed in information retrieval to let users specify search goals in a natural way. However, differences in user and system terminologies can challenge the identification of the user's information needs, and thus the generation of relevant results. We argue that the explicit management of ontological knowledge, and of the meaning of concepts (by integrating linguistic and encyclopedic knowledge in the system ontology), can improve the analysis of search queries, because it enables a flexible identification of the topics the user is searching for, regardless of the adopted vocabulary. This paper proposes an information retrieval support model based on semantic concept identification. Starting from the recognition of the ontology concepts that the search query refers to, this model exploits the qualifiers specified in the query to select information items on the basis of possibly fine-grained features. Moreover, it supports query expansion and reformulation by suggesting the exploration of semantically similar concepts, as well as of concepts related to those referred to in the query through thematic relations. A test on a dataset collected using the OnToMap Participatory GIS has shown that this approach provides accurate results. Introduction Finding information in large datasets can be challenging without support that helps users understand what can be searched for. With respect to pure category-based search, textual queries are a fairly natural interaction means. However, differences between the user's and system's domain conceptualizations can compromise the identification of the user's information needs, and thus the provision of appropriate results. We argue that, in the interpretation of textual queries, the integration of semantic and linguistic knowledge can improve the system's capability to provide relevant results because: \begin{itemize} \item It makes it possible to deal with queries expressed in different terminologies (e.g., by taking synonyms and word similarity into account), abstracting from the domain conceptualization adopted by the system, which the user is probably unaware of. \item It supports an explicit identification of the concepts on which the user focuses, preventing misunderstandings. \item It enables the expansion of queries with thematically related concepts, thus broadening the scope of the search results, depending on the user's interests. \end{itemize} These aspects contribute to overcoming the limitations of pure keyword-based search, which can fail to retrieve the desired data due to word mismatch, or can return irrelevant results because it lacks word disambiguation. Focusing on Web-GIS, which are the topic of this work, we developed an interactive query interpretation model that jointly uses linguistic and encyclopaedic knowledge, together with an ontological representation of the domain, to answer geographical queries. Our approach follows the associative information retrieval model <|cite_start|> (Reference: Linear Associative Information Retrieval: Abstract : The recognition and exploitation of term associations for the retrieval of documents is discussed. A general theory of association and associative retrieval is presented; it is based on the use of linear transformations, both for establishing associations among terms and for discriminating among documents.
The design and behavior of a simple experimental device which realizes the theory is discussed.) <|cite_end|> but is based on the execution of two query interpretation phases: \begin{enumerate} \item Semantic concept identification, by matching a semantically expanded query to the domain ontology in order to identify the referenced concepts. This enables the retrieval of a set of information items belonging to the general topics of the search query; e.g., hospitals. \item Facet-based filtering of results to take the qualifiers specified in the query into account; e.g., {\em pediatric} hospitals. Also in this case, the semantics of qualifiers is taken into account to abstract from the terminology used by the user. \end{enumerate} This two-step approach supports the generation of relevant results because information is filtered on a semantic basis. Assuming a correct identification of the concepts referenced in the query, results cannot include items belonging to concepts different from those directly or indirectly expressed by the user. Moreover, this approach supports query reformulation and expansion, e.g., by relaxing the qualifiers, or by exploiting the semantic relations defined in the ontology in order to select more general, or thematically related, concepts than those specified in the original query. This paper presents our model and describes how it is applied to support information search in the OnToMap Participatory GIS <|cite_start|> (Reference: Production of spatial representations through collaborative mapping. An experiment: This paper focuses on the theme of the spatial representation of cities and the territory, reflecting on the prospects for innovation in the expressive means that serve the study of the city. The described research concerns project "Mappe di Comunita 3.0" (http://ontomap.dyndns.org/), funded by the Fondazione CRT. The project focuses on the definition of a methodology that implements a synergistic exchange between institutional territorial knowledge and the knowledge of the citizens, achievable thanks to the mediation of communication provided by a semantic representation of territorial knowledge. That type of representation supports the description of data and of its properties in a unified language. Moreover, it enables the sharing of information on the Web by providing an integrated perspective on territorial data) <|cite_end|>, which supports information sharing and participatory decision-making. A test on a dataset collected within the OnToMap project revealed that this approach provides accurate results. This work builds on the preliminary work presented in <|cite_start|> (Reference: Exploration of Cultural Heritage Information via Textual Search Queries: Searching information in a Geographical Information System (GIS) usually imposes that users explore precompiled category catalogs and select the types of information they are looking for. Unfortunately, that approach is challenging because it forces people to adhere to a conceptualization of the information space that might be different from their own. In order to address this issue, we propose to support textual search as the basic interaction model, exploiting linguistic information, together with category exploration, for query interpretation and expansion. This paper describes our model and its adoption in the OnToMap Participatory GIS.)
<|cite_end|>, which sketched the query interpretation model described here, and extends it with the interpretation of textual queries including qualifiers, and with the presentation of preliminary test results. The remainder of this paper is organized as follows: Section \ref{related} positions our work with respect to related work. Section \ref{ontomap} provides an overview of the OnToMap application. Section \ref{model} describes our query interpretation model. Section \ref{experiments} describes the results of a preliminary evaluation of our approach and Section \ref{conclusions} concludes the paper and outlines our future work. Related Work \label{related} A flexible interpretation of textual queries presupposes that the system is able to map them to its own domain conceptualization. This mapping is particularly difficult because, as discussed in the literature, information retrieval occurs in an anomalous state of knowledge: basically, in a search task the user is asked to specify something that (s)he does not know. Indeed, it is very likely that her/his terminology differs from the one of the system and the two have to be reconciled to identify the user's information needs. Query expansion techniques have long been explored to enhance information retrieval. For instance, <|cite_start|> (Reference: Concept Based Query Expansion: Query expansion methods have been studied for a long time - with debatable success in many instances. In this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. A similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. We address the two important issues with query expansion: the selection and the weighting of additional search terms. In contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. Our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness.) <|cite_end|> proposed a statistical approach to the selection of terms for query expansion, based on the analysis of the whole query (instead of single words) and on the development of a custom thesaurus inferred from the source pool of documents. Moreover, <|cite_start|> (Reference: Combining Multiple Evidence from Different Types of Thesaurus for Query Expansion: Automatic query expansion has been known to be the most important method in overcoming the word mismatch problem in information retrieval. Thesauri have long been used by many researchers as a tool for query expansion. However only one type of thesaurus has generally been used. In this paper we analyze the characteristics of different thesaurus types and propose a method to combine them for query expansion. Experiments using the TREC collection proved the effectiveness of our method over those using one type of thesaurus.) <|cite_end|> showed that the integration of different types of thesauri (linguistic, domain specific, etc.) improves the performance of query expansion techniques with respect to the adoption of individual ones.
<|cite_start|> (Reference: Conceptual Query Expansion Model for Web Information Retrieval: ) <|cite_end|> suggests creating local thesauri, tailored to the query and to the collection being searched, and proposes a conceptual query expansion based on the combination of terms that are meaningful for the collection and form a ``formal concept". Finally, <|cite_start|> (Reference: Information retrieval systems using an associative conceptual space: An AI-based retrieval system inspired by the WEBSOM-algorithm is proposed. Contrary to the WEBSOM however, we introduce a system using only the index of every document. The knowledge extraction process results into a so-called Associative Conceptual Space where the words as found in the documents are organised using a Hebbian-type of (un)learning. Next, 'concepts' (i.e. word clusters) are identified using the SOM-algorithm. Thereupon, each document is characterised by comparing the concepts found in it, to those present in the concept space. Applying the characterisations, all documents can be clustered such that semantically similar documents lie close together on a SelfOrganising Map.) <|cite_end|> proposes to exploit Self-Organizing Maps to automatically generate associative conceptual spaces based on word co-occurrence in document spaces, saving the effort of building ad-hoc thesauri. With respect to these works, we do not attempt to define new algorithms for word sense disambiguation, but a new way to combine external services for query interpretation. Our model exploits the linguistic functions offered by sophisticated external word disambiguation services for query expansion. However, taking into account the difficulties in expanding short queries, it enhances the flexibility of concept recognition by enriching the domain ontology with linguistic and encyclopaedic knowledge that makes it possible to associate further synonyms and keywords to concepts. Thus, the expanded queries can be matched to a larger, but controlled, set of terms relevant to the application domain. Moreover, if the system identifies multiple concepts, it proposes them to the user and asks her/him to select the interesting ones for continuing the information search task. As the identified concepts are semantically related to the query, this disambiguation phase is an opportunity to discover related concepts, and other portions of the information space to be explored. Several GIS use ontologies for conceptualizing the domain <|cite_start|> (Reference: Ontologies and Knowledge Sharing in Urban GIS: ) <|cite_end|> and helping users in information retrieval. For instance, SIAPAD <|cite_start|> (Reference: A multinational SDI-based system to facilitate disaster risk management in the Andean Community: ) <|cite_end|> combines semantic knowledge representation with task-based information to map the keywords occurring in search queries to the ontology concepts related to the corresponding activities. With respect to that work, we adopt a general approach, based on linguistic and encyclopaedic knowledge, in order to make the system independent of the execution of particular tasks, which would require the representation of task-specific knowledge. Moreover, the multi-faceted conceptual domain representation used by OnToMap makes it possible to search for information under different points of view. Some systems support multi-faceted information browsing, but this is not related to textual query interpretation.
For instance, <|cite_start|> (Reference: Exploring the Web of Spatial Data with Facete: The majority of data (including data published on the Web as Linked Open Data) has a spatial dimension. However, the efficient, user friendly exploration of spatial data remains a major challenge. We present Facete, a web-based exploration and visualization application enabling the spatial-faceted browsing of data with a spatial dimension. Facete implements a novel spatial data exploration paradigm based on the following three key components: First, a domain independent faceted filtering module, which operates directly on SPARQL and supports nested facets. Second, an algorithm that efficiently detects spatial information related to those resources that satisfy the facet selection. The detected relations are used for automatically presenting data on a map. And third, a workflow for making the map display interact with data sources that contain large amounts of geometric information. We demonstrate Facete in large-scale, real world application scenarios.) <|cite_end|> presents a graphical user interface for faceted exploration of geographical Linked Data, but the navigation of the information space is done by browsing a set of hierarchical menus, with the possibility of specifying search keywords. In comparison, OnToMap supports both graph-based exploration, based on the visualization of views on the domain ontology, and a textual one, which directly maps natural language queries to ontology concepts. Other GIS, such as TripAdvisor <|cite_start|> (Reference: TripAdvisor: With the spread of the Internet era, travelers mostly book accommodation online; this booking mode, combined with the experiential nature of hotel stays, makes travelers pay close attention to past online ratings and reviews as a reference for their decisions, so online ratings and reviews are important indicators for both consumers and businesses. When rating and reviewing, consumers judge the stay along different aspects; TripAdvisor, for instance, lets consumers rate six different aspects, while textual reviews allow them to describe the overall experience in more detail. This study therefore uses hotel data for the Taipei area from the TripAdvisor website, applying text analysis to classify reviews by topical features and to perform sentiment analysis, investigating the relationship between consumer reviews and aspect ratings and identifying new latent aspects from the reviews. In addition, it examines the loss-aversion effect described by prospect theory, whereby losses of a given magnitude weigh more heavily than equivalent gains, by measuring the influence of positive and negative sentiment scores on ratings. The results show that reviews can explain the aspect ratings given by consumers, and two further aspects, restaurant/food and environment/facilities, emerge from the reviews, indicating that when aspect-rating information is missing, businesses and potential consumers can still infer aspect-level impressions from the reviews. Businesses aiming to improve the stay experience can thus work on food and environment in addition to the original six aspects. The study also confirms that, for equal increases in positive and negative sentiment, negative sentiment has the larger impact: increases in negative sentiment lower consumer ratings more than equivalent positive sentiment raises them, so businesses wishing to raise their ratings should handle negative reviews more proactively than positive ones.) <|cite_end|>, ask for a separate specification of geographical entities and information to be found. They use the keywords included in the query to match geo-data names, item reviews, etc., providing mixed results that include heterogeneous items (e.g., items tagged by the keyword, or having it in their own names, addresses, etc.). Similarly, OpenStreetMap <|cite_start|> (Reference: OpenStreetMap: Let us begin with this short video report from the Framatube archives, and continue by consulting (once again) Wikipedia: "OpenStreetMap is a project to create free maps of the world, using GPS or other free data. OpenStreetMap was founded in July 2004 by Steve Coast at University College London. The maps are available under the terms of the Creative Commons Attribution-ShareAlike 2.0 license. Through the use of Internet-based tools that allow any volunteer user to contribute and collaborate, the OpenStreetMap project belongs to geomatics 2.0 and is also a contribution to what is called neogeography.") <|cite_end|> applies the keyword-based search offered by Nominatim and returns all the items located in the bounding box that include the specified tags and keywords.
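As a rough illustration of this keyword-plus-bounding-box retrieval style (a simplified sketch under assumed data structures, not Nominatim's actual implementation):
\begin{verbatim}
def bbox_keyword_search(items, bbox, keywords):
    """Return items inside the bounding box whose name or tag values
    contain any query keyword. Pure lexical matching: there is no
    concept identification, so semantically unrelated items that
    happen to share a keyword are returned too.

    items: iterable of dicts like
        {"name": "...", "lat": ..., "lon": ..., "tags": {...}}
    bbox:  (min_lat, min_lon, max_lat, max_lon)
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    kws = [k.lower() for k in keywords]
    hits = []
    for it in items:
        inside = (min_lat <= it["lat"] <= max_lat
                  and min_lon <= it["lon"] <= max_lon)
        if not inside:
            continue
        haystack = (it["name"] + " "
                    + " ".join(map(str, it["tags"].values()))).lower()
        if any(k in haystack for k in kws):
            hits.append(it)
    return hits
\end{verbatim}
A purely lexical filter of this kind returns any item that happens to share a keyword, which is precisely the behavior that concept-based interpretation, as in OnToMap, aims to improve upon.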
MapQuest supports looking for three types of information: places, addresses and categories. The category-based search is similar to the one offered by TripAdvisor. MapQuest offers an extended set of categories corresponding to information layers, which can be added to or removed from the map. Different from all these systems, OnToMap identifies the concepts referenced in the query to retrieve coherent results, e.g., all the sport facilities located in the selected geographical area. Moreover, it supports Linked Data exploration based on the semantic relations among ontology concepts. Wikimapia supports category-based search by presenting a list of categories that users can browse, with an auto-completion search bar. Categories reflect the tags that users insert when they add new crowdsourced items to the map, and tags can be organized in a hierarchical structure. In comparison, OnToMap offers a textual interaction mode, and an ontology-based {\em navigation by concepts}, for semantically browsing both subclass and thematic relations between concepts. Some recent work on information filtering attempts to acquire relations among information types from the observation of users' behaviour, and is complementary to our work. For instance, the Google search engine manages the Knowledge Graph <|cite_start|> (Reference: Knowledge Graph: ) <|cite_end|> to relate facts, concepts and entities depending on their co-occurrence in queries. From a related perspective, CoSeNa <|cite_start|> (Reference: CoSeNa: a context-based search and navigation system: Most of the existing document and web search engines rely on keyword-based queries. To find matches, these queries are processed using retrieval algorithms that rely on word frequencies, topic recentness, document authority, and (in some cases) available ontologies. In this paper, we propose an innovative approach to exploring text collections using a novel keywords-by-concepts (KbC) graph, which supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus. The KbC graph is a weighted graph, created by tightly integrating keywords extracted from documents and concepts obtained from domain taxonomies. Documents in the corpus are associated to the nodes of the graph based on evidence supporting contextual relevance; thus, the KbC graph supports contextually informed access to these documents. In this paper, we also present CoSeNa (Context-based Search and Navigation) system that leverages the KbC model as the basis for document exploration and retrieval as well as contextually-informed media integration.) <|cite_end|> employs keyword co-occurrence in the corpus of documents to be retrieved, and ontological knowledge about the domain concepts, to support the exploration of text collections using a keywords-by-concepts graph. The graph "supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus". Finally, recent search auto-completion models, such as COMMA <|cite_start|> (Reference: Composite match autocompletion (COMMA): a semantic result-oriented autocompletion technique for e-marketplaces: Autocompletion systems support users in the formulation of queries in different situations, from development environments to the web. In this paper we describe Composite Match Autocompletion COMMA, a lightweight approach to the introduction of semantics in the realization of a semi-structured data autocompletion matching algorithm.
The approach is formally described, then it is applied and evaluated with specific reference to the e-commerce context. The semantic extension to the matching algorithm exploits available information about product categories and distinguishing features of products to enhance the elaboration of exploratory queries. COMMA supports a seamless management of both targeted/precise queries and exploratory/vague ones, combining different filtering and scoring techniques. The algorithm is evaluated with respect both to effectiveness and efficiency in a real-world scenario: the achieved improvement is significant and it is not associated to a sensible increase of computational costs.) <|cite_end|>, support the search of items in catalogs by indexing information items and by applying string-matching algorithms for item selection. Our work differs in two main aspects: firstly, we rely on item classification into ontology concepts to reduce the amount of pre-processing work to be done by the system. Secondly, we exploit domain-dependent and linguistic knowledge about ontology concepts, as well as word sense disambiguation, to support query interpretation by abstracting from the terminology used by the user. <|paper_end|>
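The two-phase interpretation model of the paper above, semantic concept identification followed by facet-based filtering, can be sketched as follows; the toy ontology, synonym sets and item facets are illustrative assumptions rather than OnToMap's actual data or code.
\begin{verbatim}
# Phase 1: map (expanded) query terms to ontology concepts via
# the synonyms/keywords attached to each concept.
ONTOLOGY = {
    "Hospital": {"synonyms": {"hospital", "clinic", "infirmary"}},
    "School":   {"synonyms": {"school", "college"}},
}

def identify_concepts(query_terms):
    terms = {t.lower() for t in query_terms}
    return [c for c, info in ONTOLOGY.items()
            if terms & info["synonyms"]]

# Phase 2: filter the concept's items with the remaining qualifiers,
# matched against item attribute values (facets).
def facet_filter(items, qualifiers):
    quals = {q.lower() for q in qualifiers}
    return [it for it in items
            if not quals
            or quals & {str(v).lower() for v in it["facets"].values()}]

items_by_concept = {
    "Hospital": [
        {"name": "S. Anna",   "facets": {"specialty": "pediatric"}},
        {"name": "Molinette", "facets": {"specialty": "general"}},
    ],
}

query = ["pediatric", "clinic"]
concepts = identify_concepts(query)               # -> ['Hospital']
# Toy assumption: whatever did not name the concept is a qualifier
qualifiers = [t for t in query
              if t.lower() not in ONTOLOGY[concepts[0]]["synonyms"]]
results = facet_filter(items_by_concept[concepts[0]], qualifiers)
# -> [{'name': 'S. Anna', 'facets': {'specialty': 'pediatric'}}]
\end{verbatim}
In the actual system, phase 1 would match against an ontology enriched with linguistic and encyclopaedic knowledge and would be preceded by external word-sense disambiguation, but the two-step structure (concepts first, qualifiers second) is the point this sketch illustrates.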
[ "<|reference_start|> Linear Associative Information Retrieval: Abstract : The recognition and exploitation of term associations for the retrieval of documents is discussed. A general theory of association and associative retrieval is presented; it is based on the use of linear transformations, both for establishing associations among terms and for discriminating among documents. The design and behavior of a simple experimental device which realizes the theory is discussed. <|reference_end|>", "<|reference_start|> Conceptual Query Expansion Model for Web Information Retrieval: <|reference_end|>", "<|reference_start|> Exploring the Web of Spatial Data with Facete: The majority of data (including data published on the Web as Linked Open Data) has a spatial dimension. However, the efficient, user friendly exploration of spatial data remains a major challenge. We present Facete, a web-based exploration and visualization application enabling the spatial-faceted browsing of data with a spatial dimension. Facete implements a novel spatial data exploration paradigm based on the following three key components: First, a domain independent faceted filtering module, which operates directly on SPARQL and supports nested facets. Second, an algorithm that efficiently detects spatial information related to those resources that satisfy the facet selection. The detected relations are used for automatically presenting data on a map. And third, a workflow for making the map display interact with data sources that contain large amounts of geometric information. We demonstrate Facete in large-scale, real world application scenarios. <|reference_end|>", "<|reference_start|> Composite match autocompletion (COMMA): a semantic result-oriented autocompletion technique for e-marketplaces: Autocompletion systems support users in the formulation of queries in different situations, from development environments to the web. In this paper we describe Composite Match Autocompletion COMMA, a lightweight approach to the introduction of semantics in the realization of a semi-structured data autocompletion matching algorithm. The approach is formally described, then it is applied and evaluated with specific reference to the e-commerce context. The semantic extension to the matching algorithm exploits available information about product categories and distinguishing features of products to enhance the elaboration of exploratory queries. COMMA supports a seamless management of both targeted/precise queries and exploratory/vague ones, combining different filtering and scoring techniques. The algorithm is evaluated with respect both to effectiveness and efficiency in a real-world scenario: the achieved improvement is significant and it is not associated to a sensible increase of computational costs. <|reference_end|>" ]
[ 0, 5, 9, 14 ]
{"<|cite_1|>": "ss-1105570", "<|multi_cite_2_1|>": "ss-1945401", "<|cite_3|>": "ss-1105571", "<|cite_5|>": "ss-1291269", "<|cite_6|>": "ss-1105572", "<|cite_7|>": "ss-1105573", "<|cite_8|>": "ss-1105574", "<|cite_9|>": "ss-1105575", "<|cite_10|>": "ss-1105576", "<|cite_11|>": "ss-1105577", "<|cite_12|>": "ss-1105578", "<|cite_13|>": "ss-729291", "<|cite_16|>": "ss-1089822", "<|cite_17|>": "ss-1091475", "<|cite_18|>": "ss-1105579"}
2101.06666
<|paper_start|> Title: Deep Learning-Aided 5G Channel Estimation Abstract: Deep Learning-Aided 5G Channel Estimation: Deep learning has demonstrated important roles in improving system performance and reducing computational complexity for $5$G-and-beyond networks. In this paper, we propose a new channel estimation method with the assistance of deep learning in order to support the least squares estimation, which is a low-cost method but suffers from relatively high channel estimation errors. This goal is achieved by utilizing a MIMO (multiple-input multiple-output) system with a multi-path channel profile used for simulations in 5G networks under severe Doppler effects. Numerical results demonstrate the superiority of the proposed deep learning-assisted channel estimation method over the other channel estimation methods in previous works in terms of mean square errors. Introduction The fifth-generation (5G) wireless communication has been developed to accommodate the exponential increase in wireless data traffic and the demand for reliable communications <|cite_start|> (Reference: Massive MIMO Communications: Over the last two decades, multiple‐input, multiple‐output (MIMO) technology has been successfully deployed on a wide scale in cellular communication systems. MIMO technology involves the use of multiple antennas at one or both ends of a communication link to boost the performance and reliability through strategies such as beamforming, diversity transmission, spatial multiplexing, and interference suppression. The currently‐deployed 4G/LTE cellular standards (LTE Rel‐8/9/10) support a comprehensive suite of MIMO techniques for up to eight antenna ports in a single sector on the downlink and up to four transmit antennas at a mobile station. For 5G cellular communications, massive MIMO, sometimes called full dimension MIMO, is a promising technology for enhancing system performance for frequency bands ranging from under 6 GHz to 100 GHz. Also, for 5G systems deployed in higher frequency bands such as cmWaves (6–30 GHz) and mmWaves (30–100 GHz), large‐scale antenna arrays will be a prerequisite for overcoming the poor propagation characteristics in those bands. This chapter will describe the basics of massive MIMO and how it will satisfy the high‐data‐rate demands of 5G cellular systems for frequency bands up to 100 GHz. The current state of the art of MIMO technology is reviewed, and the application of large‐scale antenna arrays to 5G is described. The chapter also surveys current trends in massive MIMO technology and system concepts, with a focus on methodologies for significantly enhancing cellular system performance. Various trends and promising concepts are identified, and various practical issues highlighted.) <|cite_end|>. The orthogonal frequency division multiplexing (OFDM) technique has demonstrated its success in current networks, and continues to be adopted in 5G systems to combat frequency selective fading in multi-path propagation environments <|cite_start|> (Reference: Optimality Properties, Distributed Strategies, and Measurement-Based Evaluation of Coordinated Multicell OFDMA Transmission: The throughput of multicell systems is inherently limited by interference and the available communication resources. Coordinated resource allocation is the key to efficient performance, but the demand on backhaul signaling and computational resources grows rapidly with number of cells, terminals, and subcarriers.
To handle this, we propose a novel multicell framework with dynamic cooperation clusters where each terminal is jointly served by a small set of base stations. Each base station coordinates interference to neighboring terminals only, thus limiting backhaul signalling and making the framework scalable. This framework can describe anything from interference channels to ideal joint multicell transmission. The resource allocation (i.e., precoding and scheduling) is formulated as an optimization problem (P1) with performance described by arbitrary monotonic functions of the signal-to-interference-and-noise ratios (SINRs) and arbitrary linear power constraints. Although (P1) is non-convex and difficult to solve optimally, we are able to prove: 1) Optimality of single-stream beamforming; 2) Conditions for full power usage; and 3) A precoding parametrization based on a few parameters between zero and one. These optimality properties are used to propose low-complexity strategies: both a centralized scheme and a distributed version that only requires local channel knowledge and processing. We evaluate the performance on measured multicell channels and observe that the proposed strategies achieve close-to-optimal performance among centralized and distributed solutions, respectively. In addition, we show that multicell interference coordination can give substantial improvements in sum performance, but that joint transmission is very sensitive to synchronization errors and that some terminals can experience performance degradations.) <|cite_end|>. Consequently, this technique increases the spectral efficiency compared with single-carrier techniques. Over wireless multipath channels, the signals transmitted to a particular receiver are distorted by many detrimental effects such as multi-path propagation, local scattering, and mutual interference caused by sharing radio resources. Therefore, the channel state information and its effects must be estimated and compensated for at the receiver to recover the transmitted signals. Generally, pilot symbols known to both the transmitter and receiver are used for channel estimation. In a $5$G system, the structure of the pilot symbols may vary depending on the use case. Among the conventional channel estimation methods, least squares (LS) estimation has low computational complexity since it requires no prior statistical channel information. However, this estimation method yields relatively low performance in many application scenarios. Alternatively, the minimum mean square error (MMSE) estimation method has been introduced, which minimizes the channel estimation error on average <|cite_start|> (Reference: {Fundamentals of statistical signal processing: Estimation theory: Minimum variance unbiased estimation Cramer-Rao lower bound linear models general minimum variance unbiased estimation best linear unbiased estimators maximum likelihood estimation least squares method of moments the Bayesian philosophy general Bayesian estimators linear Bayesian estimators Kalman filters summary of estimators extension for complex data and parameters.) <|cite_end|> <|cite_start|> (Reference: Large-Scale-Fading Decoding in Cellular Massive MIMO Systems with Spatially Correlated Channels: Massive multiple-input--multiple-output (MIMO) systems can suffer from coherent intercell interference due to the phenomenon of pilot contamination.
This paper investigates a two-layer decoding method that mitigates both coherent and non-coherent interference in multi-cell Massive MIMO. To this end, each base station (BS) first estimates the channels to intra-cell users using either minimum mean-squared error (MMSE) or element-wise MMSE (EW-MMSE) estimation based on uplink pilots. The estimates are used for local decoding on each BS followed by a second decoding layer where the BSs cooperate to mitigate inter-cell interference. An uplink achievable spectral efficiency (SE) expression is computed for arbitrary two-layer decoding schemes. A closed-form expression is then obtained for correlated Rayleigh fading, maximum-ratio combining, and the proposed large-scale fading decoding (LSFD) in the second layer. We also formulate a sum SE maximization problem with both the data power and LSFD vectors as optimization variables. Since this is an NP-hard problem, we develop a low-complexity algorithm based on the weighted MMSE approach to obtain a local optimum. The numerical results show that both data power control and LSFD improves the sum SE performance over single-layer decoding multi-cell Massive MIMO systems.) <|cite_end|>. The optimality of MMSE estimation is based on the assumption that the propagation channels are modeled by a linear system and each channel response follows a circularly symmetric complex Gaussian distribution for which the channel estimates can be derived in closed form <|cite_start|> (Reference: Machine learning based channel estimation: A computational approach for universal channel conditions: Recently, machine learning has been introduced in communications to deal with channel estimation. Under non-linear system models, the superiority of machine learning based estimation has been demonstrated by simulation experiments, but the theoretical analysis is not sufficient, since the performance of machine learning, especially deep learning, is hard to analyze. This paper focuses on some theoretical problems in machine learning based channel estimation. As a data-driven method, certain amount of training data is the prerequisite of a workable machine learning based estimation, and it is analyzed qualitatively in a statistic view in this paper. To deduce the exact sample size, we build a statistic model ignoring the exact structure of the learning module and then the relationship between sample size and learning performance is derived. To testify our analysis, we employ machine learning based channel estimation in OFDM system and apply two typical neural networks as the learning module: single layer or linear structure and three layer structure. The simulation results show that the analysis sample size is correct when input dimension and complexity of learning module are low, but the true required sample size will be larger than the analysis result otherwise, since the influence of the two factors is not considered in the analysis of sample size. Also, we simulate the performance of machine learning based channel estimation under quasi-stationary channel condition, where the explicit form of MMSE estimation is hard to obtain, and the simulation results exhibit the effectiveness and convenience of machine learning based channel estimation under complex channel models.)
<|cite_end|> <|cite_start|> (Reference: Proposals of multipath time-variant channel and additive coloured noise modelling for underwater acoustic ofdm-based systems: This paper presents the results of Underwater Acoustic (UWA) channel measurements, including the power delay profile and the Doppler power spectrum, and further uses them to analyse the Orthogonal Frequency Division Multiplexing (OFDM) system performance. The UWA channel simulation model is derived from the measurement data by applying a widely optimisation algorithm called the Lp-norm method. The close match of correlation functions of the measured and channel simulation model demonstrates correctness of our proposed channel modelling. Moreover, we also propose to use the Autoregressive (AR) generation method for characterising ambient noise in UWA systems. From the UWA channel simulation and coloured noise models, performance of OFDM system using different channel estimation techniques is investigated to analyse the impact of acoustic medium as multipath, Doppler and coloured noise effects. The numerical results manifest superior improvements of the so-called sparse channel estimation over other traditional ones such as least squares or minimum mean square estimation.) <|cite_end|>. Unfortunately, the MMSE estimation method has high computational complexity due to its requirement of channel statistics, i.e., the mean and covariance matrices. In many environments, such statistical information is either difficult to obtain or varies quickly over short time periods <|cite_start|> (Reference: {Performance Analysis and Optimization of the Coverage Probability in Dual Hop LoRa Networks With Different Fading Channels: In this work, the performance evaluation and the optimization of dual-hop LoRa network are investigated. In particular, the coverage probability (Pcov) of edge end-devices (EDs) is computed in closed-form expressions under various fading channels, i.e., Nakagami- $m$ and Rayleigh fading. The Pcov under Nakagami- $m$ fading is computed in the approximated closed-form expressions; the Pcov under Rayleigh fading, on the other hand, is calculated in the exact closed-form expressions. In addition, we also investigate the impact of different kinds of interference on the performance of the Pcov, i.e., intra-SF interference, inter-SF interference (or capture effect) and both intra- and inter-SF interference. Our findings show that the impact of imperfect orthogonality is not non-negligible, along with the intra-SF interference. Moreover, based on the proposed mathematical framework, we formulate an optimization problem, which finds the optimal location of the relay to maximize the coverage probability. Since it is a mixed integer program with a non-convex objective function, we decompose the original problem with discrete optimization variables into sub-problems with a convex feasible set. After that, each sub-problem is effectively solved by utilizing the gradient descent approach. Monte Carlo simulations are supplied to verify the correctness of our mathematical framework. In addition, the results manifest that our proposed optimization algorithm converges rapidly, and the coverage probability is significantly improved when the location of relay is optimized.)
<|cite_end|> <|cite_start|> (Reference: A non-stationary wideband channel model for massive MIMO communication systems: This paper proposes a novel non-stationary wideband multi-confocal ellipse two dimensional (2-D) channel model for massive multiple-input multiple-output (MIMO) communication systems. Spherical wavefront is assumed in the proposed channel model, instead of the plane wavefront assumption used in conventional MIMO channel models. In addition, the birth-death process is incorporated into the proposed model to capture the dynamic properties of clusters on both the array and time axes. Statistical properties of the channel model such as the space-time-frequency correlation function and power imbalance on the antenna array are studied. The impact of the spherical wavefront assumption on the statistical properties of the channel model is investigated. Furthermore, numerical analysis shows that the proposed channel model is able to capture specific characteristics of massive MIMO channel as observed in measurements.) <|cite_end|>. Machine learning has recently drawn much attention in various applications of wireless communications such as radio resource allocation, signal decoding, and channel estimation <|cite_start|> (Reference: An Introduction to Deep Learning for the Physical Layer: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. The paper is concluded with a discussion of open challenges and areas for future investigation.) <|cite_end|> <|cite_start|> (Reference: Sum Spectral Efficiency Maximization in Massive MIMO Systems: Benefits from Deep Learning: This paper investigates the joint data and pilot power optimization for maximum sum spectral efficiency (SE) in multi-cell Massive MIMO systems, which is a non-convex problem. We first propose a new optimization algorithm, inspired by the weighted minimum mean square error (MMSE) approach, to obtain a stationary point in polynomial time. We then use this algorithm together with deep learning to train a convolutional neural network to perform the joint data and pilot power control in sub-millisecond runtime, making it suitable for online optimization in real multi-cell Massive MIMO systems. The numerical result demonstrates that the solution obtained by the neural network is $1\%$ less than the stationary point for four-cell systems, while the sum SE loss is $2\%$ in a nine-cell system.) <|cite_end|> <|cite_start|> (Reference: Power Control in Cellular Massive MIMO with Varying User Activity: A Deep Learning Solution: This paper considers the sum spectral efficiency (SE) optimization problem in multi-cell Massive MIMO systems with a varying number of active users. This is formulated as a joint pilot and data power control problem. 
Since the problem is non-convex, we first derive a novel iterative algorithm that obtains a stationary point in polynomial time. To enable real-time implementation, we also develop a deep learning solution. The proposed neural network, PowerNet, only uses the large-scale fading information to predict both the pilot and data powers. The main novelty is that we exploit the problem structure to design a single neural network that can handle a dynamically varying number of active users; hence, PowerNet is simultaneously approximating many different power control functions with varying number inputs and outputs. This is not the case in prior works and thus makes PowerNet an important step towards a practically useful solution. Numerical results demonstrate that PowerNet only loses $2\%$ in sum SE, compared to the iterative algorithm, in a nine-cell system with up to $90$ active users in each coherence interval, and the runtime was only $0.03$ ms on a graphics processing unit (GPU). When good data labels are selected for the training phase, PowerNet can yield better sum SE than by solving the optimization problem with one initial point.) <|cite_end|> <|cite_start|> (Reference: Learning the MMSE Channel Estimator: We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the MMSE channel estimator can be reduced to O(M log M) floating point operations, where M is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model. Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.) <|cite_end|>. Regarding the channel estimation, the authors in <|cite_start|> (Reference: Learning the MMSE Channel Estimator: We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the MMSE channel estimator can be reduced to O(M log M) floating point operations, where M is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model.
Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.) <|cite_end|> exploited non-stationary channel conditions in which the channel fading vectors are modeled as conditionally Gaussian random vectors with random covariance matrices. The MMSE estimator under those conditions may be extremely costly to obtain, and thus the authors used an estimator designed under a special channel condition as the blueprint for the machine learning aided estimation. In <|cite_start|> (Reference: Deep-learning-based channel estimation for wireless energy transfer: We propose a deep-learning-based channel estimation technique for wireless energy transfer. Specifically, we develop a channel learning scheme using the deep autoencoder, which learns the channel state information (CSI) at the energy transmitter based on the harvested energy feedback from the energy receiver, in the sense of minimizing the mean square error (mse) of the channel estimation. Numerical results demonstrate that the proposed scheme learns the CSI very well and significantly outperforms the conventional scheme in terms of the channel estimation mse as well as the harvested energy.) <|cite_end|>, the authors studied channel estimation in a wireless energy transfer system in which the downlink channel is estimated from harvested energy feedback information. A deep neural network model is used to predict better channel estimates than conventional estimators such as LS or linear MMSE (LMMSE). These studies have numerically demonstrated the compelling potential of machine learning in channel estimation, provided that a sufficient training data set is available. However, they only focused on quasi-static propagation models in which channels are static and frequency flat in each coherence block. In this paper, we propose two architectures of a deep neural network (DNN) model, which are applied to the channel estimation of a $5$G MIMO-OFDM system under frequency selective fading. The performance of the proposed deep learning-aided channel estimations is then evaluated in two different scenarios based on the receiver velocity. The channel parameters in each scenario are generated based on the tapped delay line type A (TDL-A) model, which is specified by $3$GPP and representative of practical scenarios. The performance of the two DNN-aided channel estimations is compared with the traditional estimations, i.e., LS and linear MMSE (LMMSE), in terms of mean square error (MSE) and bit error rate (BER) versus signal to noise ratio (SNR) criteria.\footnote{This paper uses LMMSE estimation as a benchmark for comparison because the channel estimates by MMSE estimation are nontrivial to obtain for the considered channel profile.} In particular, the proposed DNN structure exploits a fully connected neural network to learn the features of the actual channels, taking the channel estimates obtained by LS estimation as the input. In comparison to LS estimation, we would like to evaluate how much the system performance is improved by the assistance of a DNN; furthermore, we would like to observe whether the proposed estimator can outperform LMMSE.
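To make this pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the LS-input/DNN-refinement idea; the `ChannelDNN` name, layer sizes, and training loop are illustrative placeholders rather than the exact architecture evaluated in this paper.

```python
import torch
import torch.nn as nn

def ls_estimate(y_pilot, x_pilot):
    """Least squares channel estimate at pilot positions: h_ls = y / x.
    Works elementwise on complex tensors of received and known pilots."""
    return y_pilot / x_pilot

class ChannelDNN(nn.Module):
    """Fully connected network that refines an LS estimate (illustrative
    sizes). Complex channels are handled by stacking real/imaginary parts."""
    def __init__(self, n_sub):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_sub, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 2 * n_sub),  # refined [Re, Im] channel estimate
        )

    def forward(self, h_ls):
        return self.net(h_ls)

def train_step(model, opt, h_ls_batch, h_true_batch):
    """One supervised step: minimize the MSE between the DNN output and the
    true channel, which is available offline from a channel generator."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(h_ls_batch), h_true_batch)
    loss.backward()
    opt.step()
    return loss.item()

# Usage sketch with stand-in data; real LS estimates and channels generated
# by a TDL-A simulator would be used in practice.
n_sub = 64  # number of subcarriers, an arbitrary illustrative choice
model = ChannelDNN(n_sub)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h_ls = torch.randn(32, 2 * n_sub)
h_true = torch.randn(32, 2 * n_sub)
print(train_step(model, opt, h_ls, h_true))
```

At inference time, the network simply post-processes the cheap LS estimate, so the additional cost is a few matrix multiplications per OFDM symbol; whether this refinement matches or surpasses LMMSE is precisely the empirical question examined in Section~\ref{Sec:IV}.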
The rest of this paper is organized as follows: Section~\ref{Sec:Syst} describes the 5G MIMO-OFDM system model. Section~\ref{Sec:III} presents the shortcomings of the conventional channel estimation methods and proposes the DNN-aided methods to address them. Section~\ref{Sec:IV} presents the simulation results that evaluate the performance of the proposed methods and compares them with the other benchmarks. Finally, the main conclusions of this paper are presented in Section~\ref{Sec:V}. <|paper_end|>
[ "<|reference_start|> Machine learning based channel estimation: A computational approach for universal channel conditions: Recently, machine learning has been introduced in communications to deal with channel estimation. Under non-linear system models, the superiority of machine learning based estimation has been demonstrated by simulation expriments, but the theoretical analysis is not sufficient, since the performance of machine learning, especially deep learning, is hard to analyze. This paper focuses on some theoretical problems in machine learning based channel estimation. As a data-driven method, certain amount of training data is the prerequisite of a workable machine learning based estimation, and it is analyzed qualitively in a statistic view in this paper. To deduce the exact sample size, we build a statistic model ignoring the exact structure of the learning module and then the relationship between sample size and learning performance is derived. To testify our analysis, we employ machine learning based channel estimation in OFDM system and apply two typical neural networks as the learning module: single layer or linear structure and three layer structure. The simulation results show that the analysis sample size is correct when input dimension and complexity of learning module are low, but the true required sample size will be larger the analysis result otherwise, since the influence of the two factors is not considered in the analysis of sample size. Also, we simulate the performance of machine learning based channel estimation under quasi-stationary channel condition, where the explicit form of MMSE estimation is hard to obtain, and the simulation results exhibit the effectiveness and convenience of machine learning based channel estimation under complex channel models. <|reference_end|>", "<|reference_start|> An Introduction to Deep Learning for the Physical Layer: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. The paper is concluded with a discussion of open challenges and areas for future investigation. <|reference_end|>", "<|reference_start|> Power Control in Cellular Massive MIMO with Varying User Activity: A Deep Learning Solution: This paper considers the sum spectral efficiency (SE) optimization problem in multi-cell Massive MIMO systems with a varying number of active users. This is formulated as a joint pilot and data power control problem. Since the problem is non-convex, we first derive a novel iterative algorithm that obtains a stationary point in polynomial time. To enable real-time implementation, we also develop a deep learning solution. The proposed neural network, PowerNet, only uses the large-scale fading information to predict both the pilot and data powers. 
The main novelty is that we exploit the problem structure to design a single neural network that can handle a dynamically varying number of active users; hence, PowerNet is simultaneously approximating many different power control functions with varying number inputs and outputs. This is not the case in prior works and thus makes PowerNet an important step towards a practically useful solution. Numerical results demonstrate that PowerNet only loses $2\\%$ in sum SE, compared to the iterative algorithm, in a nine-cell system with up to $90$ active users in each coherence interval, and the runtime was only $0.03$ ms on a graphics processing unit (GPU). When good data labels are selected for the training phase, PowerNet can yield better sum SE than by solving the optimization problem with one initial point. <|reference_end|>", "<|reference_start|> Deep-learning-based channel estimation for wireless energy transfer: We propose a deep-learning-based channel estimation technique for wireless energy transfer. Specifically, we develop a channel learning scheme using the deep autoencoder, which learns the channel state information (CSI) at the energy transmitter based on the harvested energy feedback from the energy receiver, in the sense of minimizing the mean square error (mse) of the channel estimation. Numerical results demonstrate that the proposed scheme learns the CSI very well and significantly outperforms the conventional scheme in terms of the channel estimation mse as well as the harvested energy. <|reference_end|>" ]
[ 4, 8, 10, 13 ]
{"<|cite_1|>": "ss-2149131", "<|cite_2|>": "arxiv-35958", "<|multi_cite_4_1|>": "ss-678398", "<|multi_cite_4_2|>": "arxiv-166667", "<|multi_cite_5_1|>": "ss-2217260", "<|multi_cite_5_2|>": "ss-2217261", "<|multi_cite_6_1|>": "ss-789462", "<|multi_cite_6_2|>": "ss-1085389", "<|multi_cite_7_1|>": "arxiv-115755", "<|multi_cite_7_2|>": "arxiv-195999", "<|multi_cite_7_3|>": "arxiv-187300", "<|multi_cite_7_4|>": "arxiv-129631", "<|cite_8|>": "arxiv-129631", "<|cite_9|>": "ss-1634968"}
1310.7991
<|paper_start|> Title: Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization Abstract: Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization: We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via $\ell_1$ minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the coefficients) is $\order{1/s^2}$, where $s$ is the sparsity level in each sample and the dictionary satisfies RIP. Combined with the recent results of approximate dictionary estimation, this yields provable guarantees for exact recovery of both the dictionary elements and the coefficients, when the dictionary elements are incoherent. Introduction A sparse code encodes each sample with a sparse set of elements, termed dictionary atoms. Specifically, given a set of samples $Y\in \R^{d\times n}$, the generative model is \[ Y= \Astar \Xstar, \qquad \Astar \in \R^{d\times r}, \Xstar\in \R^{r \times n},\] and additionally, each column of $\Xstar$ has at most $s$ non-zero entries. The columns of $\Astar$ correspond to the dictionary atoms, and the columns of $\Xstar$ correspond to the mixing coefficients of each sample. Each sample is a combination of at most $s$ dictionary atoms. Sparse codes can thus succinctly represent high dimensional observed data. The problem of sparse coding consists of unsupervised learning of the dictionary and the coefficient matrices. Thus, given only unlabeled data, we aim to learn the set of dictionary atoms or basis functions that provide a good fit to the observed data. Sparse coding is applied in a variety of domains. Sparse coding of natural images has yielded dictionary atoms which resemble the receptive fields of neurons in the visual cortex <|cite_start|> (Reference: Emergence of simple-cell receptive field properties by learning a sparse code for natural images: ) <|cite_end|> <|cite_start|> (Reference: Sparse coding with an overcomplete basis set: A strategy employed by V1?: ) <|cite_end|>, and has also yielded localized dictionary elements on speech and video data <|cite_start|> (Reference: Learning Overcomplete Representations: In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data.
We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.) <|cite_end|> <|cite_start|> (Reference: Sparse coding of time-varying natural images: We show how the principle of sparse coding may be applied to learn the forms of structure occurring in time-varying natural images. A sequence of images is described as a linear superposition of space-time functions, each of which is convolved with a time-varying coefficient signal. When a sparse, independent representation is sought over the coefficients, the basis functions that emerge are space-time inseparable functions that resemble the motion-selective receptive fields of cortical simple cells. Interestingly, the coefficients form a spike-like representation of moving images, and thus suggest an interpretation of spiking activity in the brain in terms of sparse coding in time.) <|cite_end|>. An important strength of sparse coding is that it can incorporate overcomplete dictionaries, where the number of dictionary atoms $r$ can exceed the observed dimensionality $d$. It has been argued that having an overcomplete representation provides greater flexibility in modeling and more robustness to noise <|cite_start|> (Reference: Learning Overcomplete Representations: In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.) <|cite_end|>, which is crucial for encoding complex signals present in images, speech and video. It has been shown that the performance of most machine learning methods employed downstream is critically dependent on the choice of data representations, and overcomplete representations are the key to obtaining state-of-the-art prediction results <|cite_start|> (Reference: Unsupervised feature learning and deep learning: A review and new perspectives: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data.
Although domain knowledge can be used to help design representations, learning can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, manifold learning, and deep learning. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.) <|cite_end|>. On the downside, the problem of learning sparse codes is computationally challenging, and is, in general, NP-hard <|cite_start|> (Reference: Adaptive nonlinear approximations: The problem of optimally approximating a function with a linear expansion over a redundant dictionary of waveforms is NP-hard. The greedy matching pursuit algorithm and its orthogonalized variant produce sub-optimal function expansions by iteratively choosing the dictionary waveforms which best match the function's structures. Matching pursuits provide a means of quickly computing compact, adaptive function approximations. Numerical experiments show that the approximation errors from matching pursuits initially decrease rapidly, but the asymptotic decay rate of the errors is slow. We explain this behavior by showing that matching pursuits are chaotic, ergodic maps. The statistical properties of the approximation errors of a pursuit can be obtained from the invariant measure of the pursuit. We characterize these measures using group symmetries of dictionaries and using a stochastic differential equation model. These invariant measures define a noise with respect to a given dictionary. The dictionary elements selected during the initial iterations of a pursuit correspond to a function's coherent structures. The expansion of a function into its coherent structures provides a compact approximation with a suitable dictionary. We demonstrate a denoising algorithm based on coherent function expansions. We also introduce an algorithm for adapting a dictionary for efficiently decomposing a given class of functions.) <|cite_end|>. In practice, heuristics are employed based on alternating minimization. At a high level, this consists of alternating steps, where the dictionary is kept fixed and the coefficients are updated and vice versa. Such alternating minimization methods have enjoyed empirical success in a number of settings <|cite_start|> (Reference: Efficient Sparse coding algorithms: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms.
We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.) <|cite_end|> <|cite_start|> (Reference: Method of Optimal Directions for frame design: A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy see (Proc. ICASSP '98, Seattle, USA, p.1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et. al. Experiments show typical reduction in MSE by 20-50%.) <|cite_end|> <|cite_start|> (Reference: K-SVD : An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.) <|cite_end|> <|cite_start|> (Reference: Discriminative Learned Dictionaries for Local Image Analysis: Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. 
This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.) <|cite_end|> <|cite_start|> (Reference: {Image Super-Resolution Via Sparse Representation: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.) <|cite_end|>. In this paper, we carry out a theoretical analysis of the alternating minimization procedure for sparse coding. \subsection{Summary of Results} We consider the alternating minimization procedure where we employ an initial estimate of the dictionary and then use $\ell_1$-based minimization for estimating the coefficient matrix, given the dictionary estimate. The dictionary is subsequently re-estimated given the coefficient estimates. We establish local convergence to the true dictionary $\Astar$ and coefficient matrix $\Xstar$ for this procedure whenever $\Astar$ satisfies RIP for $2s$-sparse vectors. In other words, we characterize the ``basin of attraction'' for the true solution $(\Astar, \Xstar)$ and establish that alternating minimization succeeds in its recovery when a dictionary is initialized with an error of at most $\order{1/s^2}$, where $s$ is the sparsity level. More precisely, the initial dictionary estimate $\Aiter[0]$ is required to satisfy \[ \errt[0]:=\max_{i\in [r]} \min_{z\in \{-1,+1\}} \twonorm{z\Astar_i - \Aiter[0]_i}= \order{\frac{1}{s^2}},\] where $\Astar_i$ represents the $i^{\tha}$ column of $\Astar$.
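For reference, the following is a minimal Python sketch of the alternating scheme analyzed here, assuming NumPy and scikit-learn; the penalized Lasso is used as a convenient surrogate for the exact $\ell_1$ minimization step, and the penalty, iteration count, and initialization quality are illustrative placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def alternating_minimization(Y, A0, n_iters=50, lam=0.1):
    """Alternate l1-based coefficient estimation (dictionary fixed) with
    least-squares dictionary estimation (coefficients fixed).
    Y: (d, n) samples; A0: (d, r) initial dictionary with unit-norm columns."""
    A = A0.copy()
    for _ in range(n_iters):
        # Coefficient step: column-wise Lasso as an l1-minimization surrogate.
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        X = np.column_stack([lasso.fit(A, y).coef_ for y in Y.T])  # (r, n)
        # Dictionary step: least squares, A = Y X^+ via the pseudo-inverse.
        A = Y @ np.linalg.pinv(X)
        # Project back onto unit-norm columns (resolves the scale ambiguity).
        A /= np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)
    return A, X
```

Note that the guarantees above concern exact $\ell_1$ minimization subject to $Y = AX$ together with an initialization $\Aiter[0]$ inside the stated basin of attraction; the sketch makes no attempt to construct such an initialization.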
Further, when the sparsity level satisfies $s = \order{d^{1/6}}$ and the number of samples satisfies $n = \order{r^2}$, we establish a linear rate of convergence of the alternating minimization procedure to the true dictionary, even when the dictionary is overcomplete $(r\geq d)$. For the case of incoherent dictionaries, by combining the above result with recent results on approximate dictionary estimation by Agarwal et al. <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> or Arora et al. <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|>, we guarantee exact recovery of the true solution $(\Astar, \Xstar)$ when the alternating procedure is initialized with the output of <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> or <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|>. If we employ the procedure of Agarwal et al.
<|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|>, the overall requirements are as follows: the sparsity level is required to be $s = \order{d^{1/9}, r^{1/8}}$, and the number of samples $n = \order{r^2}$ to guarantee exact recovery of the true solution. If we employ the procedure of Arora et al. <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> (in particular their \textsc{OverlappingAverage} procedure), we can establish exact recovery assuming $s = \order{r^{1/6}, \sqrt{d}}$. \subsection{Related Work} \paragraph{Analysis of local optima of non-convex programs for sparse coding: } Gribonval and Schnass <|cite_start|> (Reference: Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$-Minimization: This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ_1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y_1 . . . y_N), y_n ∈ ℝ^d of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x_1 . . . x_N), x_n ∈ ℝ^K, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) ℓ_1-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.) <|cite_end|>, Geng et al. and Jenatton et al.
<|cite_start|> (Reference: Local stability and robustness of sparse dictionary learning in the presence of noise: A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting these evidences. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.) <|cite_end|> carry out a theoretical analysis and study the conditions under which the true solution turns out to be a local optimum of a non-convex optimization problem for dictionary recovery. Gribonval and Schnass <|cite_start|> (Reference: Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$-Minimization: This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ_1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y_1 . . . y_N), y_n ∈ ℝ^d of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x_1 . . . x_N), x_n ∈ ℝ^K, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) ℓ_1-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.) <|cite_end|> and Geng et al. both consider the noiseless setting, and analyze the following non-convex program \beq \label{eqn:nonconvex}\min \|X\|_1\qquad \st \,\, Y=AX,\,\, \|A_i\|_2=1, \,\,\forall\,i\in [r].\eeq Since $A$ and $X$ are both unknown, the constraint $Y = AX$ is non-convex. It is natural to expect the true solution $(\Astar, \Xstar)$ to be a local optimum for \eqref{eqn:nonconvex} under fairly mild conditions, but this turns out to be non-trivial to establish.
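To see the latter point concretely, note that the feasible set couples $A$ and $X$ only through their product: for any permutation matrix $P$ and diagonal sign matrix $D$ with $D_{ii}\in\{-1,+1\}$, the pair $(APD,\, DP^{\top}X)$ is feasible for \eqref{eqn:nonconvex} whenever $(A,X)$ is, since \[ (APD)(DP^{\top}X) = APD^2P^{\top}X = APP^{\top}X = AX = Y, \] using $D^2 = I$ and $PP^{\top} = I$; the unit-norm constraints are preserved because the columns of $APD$ are signed permutations of the columns of $A$, and the objective is unchanged because $\|DP^{\top}X\|_1 = \|X\|_1$.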
The difficulties arise from the non-convexity of the problem and the presence of sign-permutation ambiguity, which leads to exponentially many equivalent solutions obtained via sign change and permutation. Gribonval and Schnass <|cite_start|> (Reference: Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$ -Minimization: This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ<sub>1</sub>-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y<sub>1</sub> . . . y<sub>N</sub>), y<sub>n</sub> ∈ ℝ<sup>d</sup> of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x<sub>1</sub> . . . x<sub>N</sub>), x<sub>n</sub> ∈ ℝ<sup>K</sup>, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) ℓ<sub>1</sub>-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.) <|cite_end|> established that $(\Astar, \Xstar)$ is a local optimum for \eqref{eqn:nonconvex}, but limited to the case where the dictionary matrix $A$ is square, and hence did not incorporate the overcomplete setting. Geng et al. extend the analysis to the overcomplete setting, and establish that the true solution is a local optimum of \eqref{eqn:nonconvex} w.h.p. for incoherent dictionaries, when the number of samples $n$ and sparsity level $s$ scale as \beq \label{eqn:gengresult}n = \Omega\left(\|A\|_2^4 r^3 s \right),\quad s =\order{\sqrt{d}}.\eeq In our setting, where the spectral norm is assumed to be $\|A\|_2 < \mu_1 \sqrt{r/d}$, for some constant $\mu_1>0$, the sample complexity simplifies to $n = \Omega\left( r^5 s/d^2 \right)$. Jenatton et al. <|cite_start|> (Reference: Local stability and robustness of sparse dictionary learning in the presence of noise: A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting these evidences. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries.
The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.) <|cite_end|> consider the noisy setting and analyze the modified non-convex program involving an $\ell_1$ penalty for the coefficient matrix and an $\ell_2$ penalty for the loss in fitting the samples, and establish that the true solution is in the neighborhood of a local optimum of the modified non-convex program w.h.p. when the number of samples scales as $n =\Omega\left(\|A\|_2^2 r^3 d s^2 \right)$. In our setting, this reduces to $n = \Omega \left( r^4 s^2\right) $. There are significant differences between the above works and ours. While these works establish that $(\Astar, \Xstar)$ is a local optimum of a non-convex program, they do not provide a tractable algorithm to reach this particular solution as opposed to another local optimum. In contrast, we establish guarantees for a simple alternating minimization algorithm and explicitly characterize the ``basin of attraction'' for the true solution $(\Astar, \Xstar)$. This provides precise initialization conditions for the alternating minimization to succeed. Moreover, our sample complexity requirement is much weaker: only $n = \order{r^2}$ samples are needed for our guarantees to hold. \paragraph{Alternating minimization for sparse coding: } Our analysis in this paper provides a theoretical explanation for the empirical success of alternating minimization, observed in a number of works <|cite_start|> (Reference: Efficient Sparse coding algorithms: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.) <|cite_end|> <|cite_start|> (Reference: Method of Optimal Directions for frame design: A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy see (Proc. ICASSP '98, Seattle, USA, p.1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et. al.
Experiments show typical reduction in MSE by 20-50%.) <|cite_end|> <|cite_start|> (Reference: K-SVD : An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.) <|cite_end|> <|cite_start|> (Reference: Discriminative Learned Dictionaries for Local Image Analysis: Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.) <|cite_end|> <|cite_start|> (Reference: {Image Super-Resolution Via Sparse Representation: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. 
Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.) <|cite_end|>. These methods are all based on alternating minimization, but differ mostly in how they update the dictionary elements. For instance, Lee et al. carry out least squares for updating the dictionary <|cite_start|> (Reference: Efficient Sparse coding algorithms: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.) <|cite_end|> similar to the method of optimal directions <|cite_start|> (Reference: Method of Optimal Directions for frame design: A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy see (Proc. ICASSP '98, Seattle, USA, p.1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et. al. Experiments show typical reduction in MSE by 20-50%.)
<|cite_end|>, while the K-SVD procedure <|cite_start|> (Reference: K-SVD : An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.) <|cite_end|> updates the dictionary estimate using a spectral procedure on the residual. However, none of the previous works provide theoretical guarantees on the success of the alternating minimization procedure for sparse coding. \paragraph{Guaranteed dictionary estimation: } Some of the recent works provide theoretical guarantees on the estimation of the true dictionary. Spielman et al. <|cite_start|> (Reference: Exact Recovery of Sparsely-Used Dictionaries: We consider the problem of learning sparsely used dictionaries with an arbitrary square dictionary and a random, sparse coefficient matrix. We prove that $O (n \log n)$ samples are sufficient to uniquely determine the coefficient matrix. Based on this proof, we design a polynomial-time algorithm, called Exact Recovery of Sparsely-Used Dictionaries (ER-SpUD), and prove that it probably recovers the dictionary and coefficient matrix when the coefficient matrix is sufficiently sparse. Simulation results show that ER-SpUD reveals the true dictionary as well as the coefficients with probability higher than many state-of-the-art algorithms.) <|cite_end|> establish exact recovery under $\ell_1$-based optimization when the true dictionary $\Astar$ is a basis, which rules out the overcomplete setting. Agarwal et al. <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements.
Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> and Arora et al. <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> propose methods for approximate dictionary estimation in the overcomplete setting. At a high level, both their methods involve a clustering-based approach for finding samples which share a dictionary element, and then using the subset of samples to estimate a dictionary element. Agarwal et al. <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> establish exact recovery of the true solution $(\Astar, \Xstar)$ under a ``one-shot'' Lasso procedure, when the non-zero coefficients are Bernoulli $\{-1,+1\}$ (or more generally discrete). On the other hand, we assume only mild conditions on the non-zero elements. Arora et al. <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> consider an alternating minimization procedure. However, a key distinction is that their analysis requires {\em fresh} samples in each iteration, while we consider the same samples for all the iterations.
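For concreteness, the following Python fragment is a generic sketch of the alternating scheme discussed above; it is not the exact procedure analyzed in any of the cited works or in this paper. The coefficient step solves the $\ell_1$-regularized least-squares problem by iterative soft-thresholding (ISTA), and the dictionary step performs a MOD-style least-squares update followed by column renormalization. The regularization weight, iteration counts, and initialization are illustrative assumptions.
\begin{verbatim}
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_code(A, Y, lam, n_iter=200):
    # ISTA for min_X 0.5*||Y - A X||_F^2 + lam*||X||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = soft_threshold(X + (A.T @ (Y - A @ X)) / L, lam / L)
    return X

def update_dictionary(X, Y, eps=1e-12):
    # MOD-style least-squares fit, then renormalize columns to unit norm.
    A = Y @ np.linalg.pinv(X)
    return A / np.maximum(np.linalg.norm(A, axis=0), eps)

def alternating_minimization(Y, A_init, lam=0.1, n_rounds=20):
    # Assumes n_rounds >= 1; the same sample matrix Y is reused in every round.
    A = A_init.copy()
    for _ in range(n_rounds):
        X = sparse_code(A, Y, lam)           # coefficient step
        A = update_dictionary(X, Y)          # dictionary step
    return A, X
\end{verbatim}
Note that the same matrix $Y$ is fed to every round, matching the setting analyzed here; an analysis in the style of Arora et al. would instead draw a fresh batch of samples for each round. For the guarantees discussed above to apply, the initialization \texttt{A\_init} must lie in the basin of attraction of the true dictionary.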
We show {\em exact} recovery using $n =\Omega(r^2)$ samples, while <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> can only establish that the error is bounded by $\exp[-O(n/r^2)]$. Furthermore, both the above papers <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> assume that the dictionary elements are mutually incoherent. Our local convergence result in this paper assumes only that the dictionary matrix satisfies RIP (which is strictly weaker than incoherence). For the case of incoherent dictionaries, we can employ the procedures of Agarwal et al. <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> or Arora et al. <|cite_start|> (Reference: A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora.
Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.) <|cite_end|> for initializing the alternating procedure and obtain overall guarantees in such scenarios. \paragraph{Other works on sparse coding: } Some of the other recent works are only tangentially related to this paper. For instance, the works <|cite_start|> (Reference: The Sample Complexity of Dictionary Learning: A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a set of signals to be represented. Can we expect that the representation found by such a dictionary for a previously unseen example from the same source will have L_2 error of the same magnitude as those for the given examples? We assume signals are generated from a fixed distribution, and study this questions from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L_2 error in representation when the dictionary is used. For the case of l_1 regularized coefficient selection we provide a generalization bound of the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the number of elements in the dictionary, lambda is a bound on the l_1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m)) under an assumption on the level of orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results further yield fast rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher complexity. We provide similar results in a general setting using kernels with weak smoothness requirements.) <|cite_end|> <|cite_start|> (Reference: Sparsity-Based Generalization Bounds for Predictive Sparse Coding: The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding has demonstrated impressive performance on a variety of supervised tasks, but its generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, in the overcomplete setting, where the number of features k exceeds the original dimensionality d. The learning bound decays as O(√dk/m) with respect to d, k, and the size m of the training sample. 
It depends intimately on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we also present a fundamental stability result for the LASSO, a result that characterizes the stability of the sparse codes with respect to dictionary perturbations.) <|cite_end|> <|cite_start|> (Reference: Sparse coding for multitask and transfer learning: We investigate the use of sparse coding and dictionary learning in the context of multitask and transfer learning. The central assumption of our learning method is that the tasks parameters are well approximated by sparse linear combinations of the atoms of a dictionary on a high or infinite dimensional space. This assumption, together with the large quantity of available data in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach, for both settings. Numerical experiments on one synthetic and two real datasets show the advantage of our method over single task learning, a previous method based on orthogonal and dense representation of the tasks and a related method learning task grouping.) <|cite_end|> <|cite_start|> (Reference: Learning stable multilevel dictionaries for sparse representation of images: Dictionaries adapted to the data provide superior performance when compared to predefined dictionaries in applications involving sparse representations. Algorithmic stability and generalization are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representation of image patches, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-lines clustering, in order to learn a hierarchical dictionary with multiple levels. Furthermore, we propose a regularized pursuit scheme for computing sparse representations using a multilevel dictionary. Using simulations, we demonstrate the stability and generalization characteristics of the proposed algorithm with natural image patches. Finally, we employ multilevel dictionaries for compressed recovery and demonstrate improvements in recovery performance using both random and optimized projections when compared to baseline K-SVD dictionaries.) <|cite_end|> provide generalization bounds for predictive sparse coding, without computational considerations, which differs from the generative setting and the algorithmic considerations of this paper. Parametric dictionary learning is considered in <|cite_start|> (Reference: Parametric dictionary design for sparse coding: This paper introduces a new dictionary design method for sparse coding of a class of signals. It has been shown that one can sparsely approximate some natural signals using an overcomplete set of parametric functions. A problem in using these parametric dictionaries is how to choose the parameters. In practice, these parameters have been chosen by an expert or through a set of experiments. In the sparse approximation context, it has been shown that an incoherent dictionary is appropriate for the sparse approximation methods. In this paper, we first characterize the dictionary design problem, subject to a constraint on the dictionary. Then we briefly explain that equiangular tight frames have minimum coherence.
The complexity of the problem does not allow it to be solved exactly. We introduce a practical method to approximately solve it. Some experiments show the advantages one gets by using these dictionaries.) <|cite_end|>, where the data is fitted to dictionaries with small coherence. Note that we provide guarantees when the underlying dictionary is incoherent, but do not constrain our method to produce an incoherent dictionary. The problem of sparse coding is also closely related to the problem of blind source separation, and we refer the reader to <|cite_start|> (Reference: Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions.) <|cite_end|> for an extended survey of these works. \paragraph{Majorization-minimization algorithms for biconvex optimization:} Beyond the specific problem of sparse coding, alternating optimization procedures more generally are a natural fit for biconvex optimization problems, where the objective is individually convex in two sets of variables but not jointly convex. Perhaps the most general study of these problems has been carried out in the framework of majorization-minimization schemes <|cite_start|> (Reference: Optimization transfer using surrogate objective functions: Abstract The well-known EM algorithm is an optimization transfer algorithm that depends on the notion of incomplete or missing data. By invoking convexity arguments, one can construct a variety of other optimization transfer algorithms that do not involve missing data. These algorithms all rely on a majorizing or minorizing function that serves as a surrogate for the objective function. Optimizing the surrogate function drives the objective function in the correct direction. This article illustrates this general principle by a number of specific examples drawn from the statistical literature. Because optimization transfer algorithms often exhibit the slow convergence of EM algorithms, two methods of accelerating optimization transfer are discussed and evaluated in the context of specific problems.) <|cite_end|>, or under the name of the EM algorithm in statistics literature. In this generality, the strongest result one can typically provide is a convergence guarantee to a local optimum of the problem. When the bi-convex objective is defined over probability measures, Csiszar presents a fairly general set of conditions on the objective function, under which linear convergence to the global optimum is guaranteed (see, e.g. the recent tutorial <|cite_start|> (Reference: Information Theory and Statistics: A Tutorial: This tutorial is concerned with applications of information theory concepts in statistics, in the finite alphabet setting. The information measure known as information divergence or Kullback-Leibler distance or relative entropy plays a key role, often with a geometric flavor as an analogue of squared Euclidean distance, as in the concepts of I-projection, I-radius and I-centroid. 
The topics covered include large deviations, hypothesis testing, maximum likelihood estimation in exponential families, analysis of contingency tables, and iterative algorithms with an "information geometry" background. Also, an introduction is provided to the theory of universal coding, and to statistical inference via the minimum description length principle motivated by that theory.) <|cite_end|> for an excellent overview). However, these conditions do not seem to easily hold in the context of dictionary learning. Alternating optimization in related contexts has also been studied in a variety of matrix factorization problems such as low-rank matrix completion and non-negative matrix factorization. Perhaps the most closely related to our work are the similar results for low-rank matrix completion problems by Jain et al. <|cite_start|> (Reference: Low-rank Matrix Completion using Alternating Minimization: Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.) <|cite_end|>. \paragraph{Notation: } Let $[n]:=\{1,2, \ldots, n\}$. For a vector $v$ or a matrix $W$, we will use the shorthand $\nzset(v)$ and $\nzset(W)$ to denote the set of non-zero entries of $v$ and $W$ respectively. $\|w\|_p$ denotes the $\ell_p$ norm of vector $w$; by default, $\|w\|$ denotes the $\ell_2$ norm of $w$. $\|W\|_2$ denotes the spectral norm (largest singular value) of matrix $W$. $\|W\|_\infty$ denotes the largest element (in magnitude) of $W$. For a matrix $X$, $\row{X}{i}$, $\col{X}{i}$ and $\elt{X}{i}{j}$ denote the $i^{\tha}$ row, $i^{\tha}$ column and $(i,j)^{\tha}$ element of $X$ respectively. <|paper_end|>
[ "<|reference_start|> Method of Optimal Directions for frame design: A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy see (Proc. ICASSP '98, Seattle, USA, p.1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et. al. Experiments show typical reduction in MSE by 20-50%. <|reference_end|>", "<|reference_start|> A Practical Algorithm for Topic Modeling with Provable Guarantees: Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster. <|reference_end|>", "<|reference_start|> Local stability and robustness of sparse dictionary learning in the presence of noise: A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting these evidences. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations. <|reference_end|>", "<|reference_start|> Exact Recovery of Sparsely Used Overcomplete Dictionaries: We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. 
We prove that our method exactly recovers both the dictionary and the coefficient matrix under a set of sufficient conditions. <|reference_end|>" ]
[ 8, 17, 22, 34 ]
{"<|multi_cite_1_1|>": "ss-793499", "<|multi_cite_1_2|>": "ss-875839", "<|multi_cite_2_1|>": "ss-1022631", "<|multi_cite_2_2|>": "ss-1667401", "<|cite_3|>": "ss-1022631", "<|cite_4|>": "ss-1686435", "<|cite_5|>": "ss-1086778", "<|multi_cite_6_1|>": "ss-1006718", "<|multi_cite_6_2|>": "ss-901083", "<|multi_cite_6_3|>": "ss-766242", "<|multi_cite_6_4|>": "ss-1047519", "<|multi_cite_6_5|>": "ss-1366544", "<|cite_7|>": "ss-2005529", "<|cite_8|>": "arxiv-39423", "<|cite_9|>": "ss-2005529", "<|cite_10|>": "arxiv-39423", "<|cite_11|>": "ss-2005529", "<|cite_12|>": "arxiv-39423", "<|cite_13|>": "ss-1436233", "<|cite_15|>": "arxiv-36690", "<|cite_16|>": "ss-1436233", "<|cite_18|>": "ss-1436233", "<|cite_20|>": "arxiv-36690", "<|multi_cite_21_1|>": "ss-1006718", "<|multi_cite_21_2|>": "ss-901083", "<|multi_cite_21_3|>": "ss-766242", "<|multi_cite_21_4|>": "ss-1047519", "<|multi_cite_21_5|>": "ss-1366544", "<|cite_22|>": "ss-1006718", "<|cite_23|>": "ss-901083", "<|cite_24|>": "ss-766242", "<|cite_25|>": "arxiv-33224", "<|cite_26|>": "ss-2005529", "<|cite_27|>": "arxiv-39423", "<|cite_28|>": "ss-2005529", "<|cite_29|>": "arxiv-39423", "<|cite_30|>": "arxiv-39423", "<|multi_cite_31_1|>": "arxiv-39423", "<|multi_cite_31_2|>": "ss-2005529", "<|cite_32|>": "ss-2005529", "<|cite_33|>": "arxiv-39423", "<|multi_cite_34_1|>": "arxiv-17533", "<|multi_cite_34_2|>": "ss-797262", "<|multi_cite_34_3|>": "arxiv-35777", "<|multi_cite_34_4|>": "ss-1436234", "<|cite_35|>": "ss-993272", "<|cite_36|>": "ss-2005529", "<|cite_37|>": "ss-1541383", "<|cite_38|>": "ss-1137802", "<|cite_39|>": "arxiv-38804"}
2101.06551-1
<|cite_start|> (Reference: Community Detection in Networks with Node Attributes: Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.) <|cite_end|> <|cite_start|> (Reference: Block-lda: Jointly modeling entity-annotated text and entity-entity links: Identifying latent groups of entities from observed interactions between pairs of entities is a frequently encountered problem in areas like analysis of protein interactions and social networks. We present a model that combines aspects of mixed membership stochastic block models and topic models to improve entity-entity link modeling by jointly modeling links and text about the entities that are linked. We apply the model to two datasets: a protein-protein interaction (PPI) dataset supplemented with a corpus of abstracts of scientific publications annotated with the proteins in the PPI dataset and an Enron email corpus. The model is evaluated by inspecting induced topics to understand the nature of the data and by quantitative methods such as functional category prediction of proteins and perplexity which exhibit improvements when joint modeling is used over baselines that use only link or text information.) <|cite_end|> <|cite_start|> (Reference: Learning to Discover Social Circles in Ego Networks: Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. 'circles' on Google+, and 'lists' on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user's network grows. We define a novel machine learning task of identifying users' social circles. We pose the problem as a node clustering problem on a user's ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground-truth.) 
<|cite_end|> <|cite_start|> (Reference: Joint cluster analysis of attribute data and relationship data: the connected k-center problem: Attribute data and relationship data are two principle types of data, representing the intrinsic and extrinsic properties of entities. While attribute data has been the main source of data for cluster analysis, relationship data such as social networks or metabolic networks are becoming increasingly available. It is also common to observe both data types carry orthogonal information such as in market segmentation and community identification, which calls for a joint cluster analysis of both data types so as to achieve more accurate results. For this purpose, we introduce the novel Connected kCenter problem, taking into account attribute data as well as relationship data. We analyze the complexity of this problem and prove its NP-completeness. We also present a constant factor approximation algorithm, based on which we further design NetScan, a heuristic algorithm that is efficient for large, real databases. Our experimental evaluation demonstrates the meaningfulness and accuracy of the NetScan results.) <|cite_end|> <|cite_start|> (Reference: Graph Clustering based on Structural/Attribute Similarities: The goal of graph clustering is to partition vertices in a large graph into different clusters based on various criteria such as vertex connectivity or neighborhood similarity. Graph clustering techniques are very useful for detecting densely connected groups in a large graph. Many existing graph clustering methods mainly focus on the topological structure for clustering, but largely ignore the vertex properties which are often heterogenous. In this paper, we propose a novel graph clustering algorithm, SA-Cluster, based on both structural and attribute similarities through a unified distance measure. Our method partitions a large graph associated with attributes into k clusters so that each cluster contains a densely connected subgraph with homogeneous attribute values. An effective method is proposed to automatically learn the degree of contributions of structural similarity and attribute similarity. Theoretical analysis is provided to show that SA-Cluster is converging. Extensive experimental results demonstrate the effectiveness of SA-Cluster through comparison with the state-of-the-art graph clustering and summarization methods.) <|cite_end|>. A study closely related to our approach proposes a generative model for networks with node attributes <|cite_start|> (Reference: Community Detection in Networks with Node Attributes: Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes.
CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.) <|cite_end|>. However, the depth of the features, especially the nodes' attributes, is limited: a single node attribute (\textit{hashtag}) is insufficient to analyse the similarity between network entities in a complex environment such as Twitter, in which the structural component is not fully captured due to the reliance on directed edges. The connected k-centre approach employs both structural and attribute information for a given network partition <|cite_start|> (Reference: Joint cluster analysis of attribute data and relationship data: the connected k-center problem: Attribute data and relationship data are two principle types of data, representing the intrinsic and extrinsic properties of entities. While attribute data has been the main source of data for cluster analysis, relationship data such as social networks or metabolic networks are becoming increasingly available. It is also common to observe both data types carry orthogonal information such as in market segmentation and community identification, which calls for a joint cluster analysis of both data types so as to achieve more accurate results. For this purpose, we introduce the novel Connected kCenter problem, taking into account attribute data as well as relationship data. We analyze the complexity of this problem and prove its NP-completeness. We also present a constant factor approximation algorithm, based on which we further design NetScan, a heuristic algorithm that is efficient for large, real databases. Our experimental evaluation demonstrates the meaningfulness and accuracy of the NetScan results.) <|cite_end|>. The problem is NP-hard, leading to many heuristics. Similarly, the SA-cluster method combines structural and attribute similarities for community detection by partitioning a network into k cohesive clusters, using a distance metric to estimate pairwise node similarity or closeness <|cite_start|> (Reference: Graph Clustering based on Structural/Attribute Similarities: The goal of graph clustering is to partition vertices in a large graph into different clusters based on various criteria such as vertex connectivity or neighborhood similarity. Graph clustering techniques are very useful for detecting densely connected groups in a large graph. Many existing graph clustering methods mainly focus on the topological structure for clustering, but largely ignore the vertex properties which are often heterogenous. In this paper, we propose a novel graph clustering algorithm, SA-Cluster, based on both structural and attribute similarities through a unified distance measure. Our method partitions a large graph associated with attributes into k clusters so that each cluster contains a densely connected subgraph with homogeneous attribute values. An effective method is proposed to automatically learn the degree of contributions of structural similarity and attribute similarity. Theoretical analysis is provided to show that SA-Cluster is converging.
Extensive experimental results demonstrate the effectiveness of SA-Cluster through comparison with the state-of-the-art graph clustering and summarization methods.) <|cite_end|>. Conventional methods, such as normalised cut <|cite_start|> (Reference: {Normalized Cuts and Image Segmentation: We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found results very encouraging.) <|cite_end|> and modularity <|cite_start|> (Reference: Modularity and Community Structure in Networks: Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as "modularity" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.) <|cite_end|>, are based on topological structures. However, many networks come with incomplete information, e.g., a \textit{terrorist network} or food web <|cite_start|> (Reference: Community detection in incomplete information networks: With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks.) <|cite_end|>; thus, community detection in networks with edge uncertainty or incomplete information is gaining traction.
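As a concrete illustration of combining the two information sources, the following Python fragment sketches one simple way to mix structural and attribute similarity into a single pairwise distance. The Jaccard and cosine choices and the fixed weight \texttt{alpha} are illustrative assumptions, and differ from the unified distance measure of SA-cluster, which learns the degree of contribution of each source automatically.
\begin{verbatim}
import numpy as np

def combined_distance(adj, attrs, alpha=0.5):
    # adj:   (n, n) symmetric 0/1 adjacency matrix
    # attrs: (n, k) node-attribute matrix
    # alpha: fixed structure/attribute weight (an illustrative choice)
    adj = adj.astype(float)
    deg = adj.sum(axis=1, keepdims=True)
    common = adj @ adj.T                   # shared neighbours of each pair
    union = deg + deg.T - common
    s_struct = np.divide(common, union,    # Jaccard overlap of neighbourhoods
                         out=np.zeros_like(common), where=union > 0)
    norms = np.maximum(np.linalg.norm(attrs, axis=1, keepdims=True), 1e-12)
    s_attr = (attrs / norms) @ (attrs / norms).T   # cosine similarity
    # Convert the mixed similarity into a distance (in [0, 2]).
    return 1.0 - (alpha * s_struct + (1.0 - alpha) * s_attr)
\end{verbatim}
The resulting matrix can be fed to any off-the-shelf distance-based clustering routine (e.g., hierarchical clustering), which is also the general recipe behind the metric-learning approach to incomplete networks discussed next.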
Inferring links in incomplete networks is challenging because the information is usually localised within a small, linked group. The full wealth of data has been used to learn a generalisable distance metric to complete the missing information <|cite_start|> (Reference: Community detection in incomplete information networks: With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks.) <|cite_end|>. However, this approach is overly complex and does not account for the breadth of \textit{textual} information required in networks with many transient connections, such as Twitter\footnote{As shown in Figure~\ref{fig:Twitter-ecosystem}, Twitter communities are formed based on many factors.}. The MCT is a two-stage clustering technique that recognises different modalities as distinct information sources; it incorporates multiview aspects at two levels, structural and textual, using independent features. <|paper_end|>
[ "<|reference_start|> Graph Clustering based on Structural/Attribute Similarities: The goal of graph clustering is to partition vertices in a large graph into different clusters based on various criteria such as vertex connectivity or neighborhood similarity. Graph clustering techniques are very useful for detecting densely connected groups in a large graph. Many existing graph clustering methods mainly focus on the topological structure for clustering, but largely ignore the vertex properties which are often heterogenous. In this paper, we propose a novel graph clustering algorithm, SA-Cluster, based on both structural and attribute similarities through a unified distance measure. Our method partitions a large graph associated with attributes into k clusters so that each cluster contains a densely connected subgraph with homogeneous attribute values. An effective method is proposed to automatically learn the degree of contributions of structural similarity and attribute similarity. Theoretical analysis is provided to show that SA-Cluster is converging. Extensive experimental results demonstrate the effectiveness of SA-Cluster through comparison with the state-of-the-art graph clustering and summarization methods. <|reference_end|>", "<|reference_start|> Joint cluster analysis of attribute data and relationship data: the connected k-center problem: Attribute data and relationship data are two principle types of data, representing the intrinsic and extrinsic properties of entities. While attribute data has been the main source of data for cluster analysis, relationship data such as social networks or metabolic networks are becoming increasingly available. It is also common to observe both data types carry orthogonal information such as in market segmentation and community identification, which calls for a joint cluster analysis of both data types so as to achieve more accurate results. For this purpose, we introduce the novel Connected kCenter problem, taking into account attribute data as well as relationship data. We analyze the complexity of this problem and prove its NP-completeness. We also present a constant factor approximation algorithm, based on which we further design NetScan, a heuristic algorithm that is efficient for large, real databases. Our experimental evaluation demonstrates the meaningfulness and accuracy of the NetScan results. <|reference_end|>", "<|reference_start|> Community detection in incomplete information networks: With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. 
Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks. <|reference_end|>", "<|reference_start|> Community detection in incomplete information networks: With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks. <|reference_end|>" ]
[ 4, 6, 10, 11 ]
{"<|cite_1|>": "ss-2532478", "<|multi_cite_2_1|>": "ss-979170", "<|multi_cite_2_3|>": "arxiv-669191", "<|multi_cite_2_4|>": "ss-1423212", "<|multi_cite_3_1|>": "ss-2211227", "<|multi_cite_3_2|>": "ss-2211228", "<|cite_5|>": "ss-2211229", "<|multi_cite_6_2|>": "ss-1691371", "<|cite_7|>": "ss-2354700", "<|multi_cite_8_1|>": "ss-1518366", "<|multi_cite_8_2|>": "ss-1379313", "<|cite_9|>": "ss-1691371", "<|cite_10|>": "ss-2211230", "<|multi_cite_11_1|>": "ss-1439477", "<|multi_cite_11_2|>": "arxiv-56133", "<|cite_12|>": "ss-2521979", "<|cite_13|>": "ss-1634372", "<|multi_cite_14_1|>": "arxiv-669191", "<|multi_cite_14_2|>": "ss-1691371", "<|cite_15|>": "ss-2354700", "<|cite_16|>": "ss-2211231", "<|cite_17|>": "arxiv-669191", "<|cite_18|>": "ss-898564", "<|cite_19|>": "ss-1691371", "<|cite_20|>": "ss-2581743", "<|cite_21|>": "ss-2397500", "<|multi_cite_22_1|>": "ss-2211227", "<|multi_cite_22_2|>": "ss-2211228", "<|cite_24|>": "ss-1018798", "<|cite_25|>": "ss-1073130", "<|cite_26|>": "ss-2199087", "<|cite_27|>": "ss-2211232", "<|cite_28|>": "ss-1325448", "<|cite_29|>": "ss-744618", "<|multi_cite_30_1|>": "ss-1317238", "<|multi_cite_30_2|>": "ss-2274675", "<|multi_cite_30_3|>": "ss-2153765", "<|multi_cite_30_4|>": "arxiv-56133", "<|multi_cite_31_1|>": "ss-768129", "<|multi_cite_31_2|>": "ss-2537575", "<|multi_cite_31_3|>": "ss-1171036", "<|multi_cite_31_4|>": "ss-2454739", "<|cite_32|>": "ss-793106", "<|cite_33|>": "arxiv-669191", "<|multi_cite_34_1|>": "ss-979170", "<|multi_cite_34_3|>": "arxiv-669191", "<|cite_35|>": "ss-1518369", "<|cite_36|>": "ss-1000646", "<|cite_37|>": "ss-805762", "<|cite_38|>": "ss-1000646", "<|cite_39|>": "ss-805762", "<|multi_cite_40_1|>": "ss-2211233", "<|multi_cite_40_2|>": "ss-2532478", "<|multi_cite_40_3|>": "ss-1832000", "<|cite_41|>": "ss-1266722", "<|cite_42|>": "ss-2199087", "<|cite_43|>": "ss-2211234", "<|cite_44|>": "ss-1832000", "<|cite_45|>": "ss-842801", "<|cite_47|>": "ss-1454435", "<|cite_48|>": "ss-1518369", "<|cite_49|>": "ss-1266722", "<|cite_50|>": "arxiv-2955", "<|cite_51|>": "ss-744618", "<|cite_52|>": "ss-744618", "<|cite_53|>": "ss-1832000", "<|cite_54|>": "ss-833341", "<|cite_55|>": "ss-979170", "<|multi_cite_65_1|>": "ss-1126779", "<|multi_cite_65_2|>": "ss-1317238", "<|multi_cite_65_3|>": "ss-1256251", "<|cite_66|>": "ss-975016", "<|multi_cite_57_1|>": "ss-1376384", "<|multi_cite_57_2|>": "ss-1279488", "<|multi_cite_58_1|>": "ss-1087096", "<|multi_cite_58_2|>": "arxiv-143354", "<|cite_67|>": "ss-1087096", "<|multi_cite_59_1|>": "arxiv-56133", "<|multi_cite_59_2|>": "ss-1317238", "<|multi_cite_59_3|>": "ss-2153765", "<|multi_cite_59_4|>": "ss-2211235", "<|multi_cite_59_5|>": "ss-762479", "<|cite_60|>": "arxiv-56133", "<|cite_68|>": "ss-2211235", "<|cite_69|>": "ss-762479", "<|cite_61|>": "ss-1325448", "<|cite_62|>": "ss-744618", "<|cite_63|>": "ss-2274675", "<|cite_70|>": "ss-2274675"}
1012.0016
<|paper_start|> Title: Is Light-Tree Structure Optimal for Multicast Routing in Sparse Light Splitting WDM Networks? Abstract: Is Light-Tree Structure Optimal for Multicast Routing in Sparse Light Splitting WDM Networks?: To minimize the number of wavelengths required by a multicast session in sparse light splitting wavelength division multiplexing (WDM) networks, a light-hierarchy structure, which occupies the same wavelength on all links, is proposed to span as many destinations as possible. Different from a light-tree, a light-hierarchy accepts cycles, which are used to traverse crosswise a 4-degree (or above) multicast incapable (MI) node twice (or more) and switch two light signals on the same wavelength to two destinations in the same multicast session. In this paper, firstly, a graph renewal and distance priority light-tree algorithm (GRDP-LT) is introduced to improve the quality of light-trees built for a multicast request. Then, it is extended to compute light-hierarchies. The numerical results obtained demonstrate that GRDP-LT light-trees achieve a much lower link stress, better wavelength channel cost, and smaller average end-to-end delay as well as diameter than the currently most efficient algorithm. Furthermore, compared to light-trees, the performance in terms of link stress and network throughput is greatly improved again by employing the light-hierarchy, while consuming the same amount of wavelength channel cost. Introduction \label{introdcution} With the inherent capacity to provide high bandwidth and small delay, all-optical Wavelength Division Multiplexing (WDM) networks enable the growth of bandwidth-driven and time-sensitive multimedia applications, such as video distribution and multimedia conferencing <|cite_start|> (Reference: Multicast routing and wavelength assignment in WDM networks with limited drop-offs: In WDM networks with limited drop-offs, the route of a multicast connection consists of a set of light-trees. Each of the light-trees is rooted at the source node and contains no more than a limited number, say k, destination nodes due to the power loss of dropping optical signals off at destination nodes. We call such a light-tree k-drop light-tree. In this paper we study the multicast routing problem of constructing a set of k-drop light-trees that have the minimal network cost. The network cost of a set of light-trees is defined as the summation of the link cost of all the light-trees. We first prove that this problem is polynomial-time solvable for k=2 and NP-hard for k ≥ 3. We then propose a 4-approximation algorithm for the problem for k ≥ 3. A wavelength assignment algorithm is also proposed to assign wavelengths to the light-trees of a multicast connection. In the end we give simulation results showing that k-drop multitree routing can significantly save not only the network cost but also wavelengths used. Moreover, when k ≥ 5 its performance is very close to the case where k is infinite (i.e., the case of using a single tree for a multicast connection).) <|cite_end|>. Multicast, which aims to distribute messages simultaneously from the same source to various group members, is essential for supporting these applications.
Multicast is bandwidth-efficient because it eliminates the necessity for the source to send an individual copy of the message to each destination, and it avoids flooding the whole network by broadcasting <|cite_start|> (Reference: Multicasting in WDM networks: Wavelength-division multiplexing (WDM) networks are believed to be a promising candidate to meet the explosive increase of bandwidth demand in the Internet. In this article, we survey the problems of and approaches to multicasting in WDM networks. In particular, we address the issues in the context of three types of WDM networks: broadcast-and-select, wavelength-routed, and optical burst-switched (OBS) WDM networks. Broadcast-and-select WDM networks are typically for WDM LANs/MANs, and can be either single-hop or multihop. Various multicast scheduling algorithms (MSAs) are discussed for single-hop networks. For multihop networks, we discuss how channel sharing can be employed to effectively support multicast. In a wavelength-routed WDM network, supporting multicast leads to the multicast routing and wavelength assignment (MC-RWA) problem, which has been discussed for different scenarios, including sparse-splitting networks. We also discuss the problem of efficiently supporting multicast in optical burst-switched (OBS) networks, where the overheads due to control packets and guard bands need to be considered.) <|cite_end|>. However, implementing multicast in wide area network (WAN) WDM networks is challenging due to the high complexity of multicast routing <|cite_start|> (Reference: Multicast routing and wavelength assignment in WDM networks with limited drop-offs: In WDM networks with limited drop-offs, the route of a multicast connection consists of a set of light-trees. Each of the light-trees is rooted at the source node and contains no more than a limited number, say k, destination nodes due to the power loss of dropping optical signals off at destination nodes. We call such a light-tree k-drop light-tree. In this paper we study the multicast routing problem of constructing a set of k-drop light-trees that have the minimal network cost. The network cost of a set of light-trees is defined as the summation of the link cost of all the light-trees. We first prove that this problem is polynomial-time solvable for k=2 and NP-hard for k ≥ 3. We then propose a 4-approximation algorithm for the problem for k ≥ 3. A wavelength assignment algorithm is also proposed to assign wavelengths to the light-trees of a multicast connection. In the end we give simulation results showing that k-drop multitree routing can significantly save not only the network cost but also wavelengths used. Moreover, when k ≥ 5 its performance is very close to the case where k is infinite (i.e., the case of using a single tree for a multicast connection).) <|cite_end|>, let alone in sparse light splitting <|cite_start|> (Reference: Benefits of multicasting in all-optical networks: All-optical WDM networks are fast becoming the natural choice for future backbone. In this paper, we establish the efficiency of multicasting over unicasting in all-optical WDM networks, assess the usefulness of wavelength conversion for multicasting, and explore the issues related to the splitting (or copying) capability of the nodes. The comparison between multicasting and unicasting is based on the number of wavelengths as well as the amount of bandwidth required for a given set of multicasting sessions.
For each multicasting session, a source-specific multicasting forest (or trees) is constructed first, taking into account the sparse splitting capability of the nodes in the network. Then, each multicasting tree is partitioned into segments according to the sparse wavelength conversion capability of the nodes on the tree such that each segment needs to be assigned the same wavelength. Simulation results obtained for a practical network such as NSFNET and randomly generated networks show that multicasting can reduce both the bandwidth consumed and the number of wavelengths required by as much as 50% or more when the size (i.e. the number of destinations) of each multicasting session is reasonably large. Such a reduction due to multicasting is not affected much by the wavelength conversion capability, the number of multicasting sessions and the size of the networks whose topology is more or less random. The results have also shown that sparse splitting can be nearly as effective as full splitting for multicasting.) <|cite_end|> WDM mesh networks, where some nodes, namely the multicast-capable nodes (MC <|cite_start|> (Reference: Benefits of multicasting in all-optical networks: All-optical WDM networks are fast becoming the natural choice for future backbone. In this paper, we establish the efficiency of multicasting over unicasting in all-optical WDM networks, assess the usefulness of wavelength conversion for multicasting, and explore the issues related to the splitting (or copying) capability of the nodes. The comparison between multicasting and unicasting is based on the number of wavelengths as well as the amount of bandwidth required for a given set of multicasting sessions. For each multicasting session, a source-specific multicasting forest (or trees) is constructed first, taking into account the sparse splitting capability of the nodes in the network. Then, each multicasting tree is partitioned into segments according to the sparse wavelength conversion capability of the nodes on the tree such that each segment needs to be assigned the same wavelength. Simulation results obtained for a practical network such as NSFNET and randomly generated networks show that multicasting can reduce both the bandwidth consumed and the number of wavelengths required by as much as 50% or more when the size (i.e. the number of destinations) of each multicasting session is reasonably large. Such a reduction due to multicasting is not affected much by the wavelength conversion capability, the number of multicasting sessions and the size of the networks whose topology is more or less random. The results have also shown that sparse splitting can be nearly as effective as full splitting for multicasting.) <|cite_end|>) can support multicast and the others, namely the multicast-incapable nodes (MI <|cite_start|> (Reference: Benefits of multicasting in all-optical networks: All-optical WDM networks are fast becoming the natural choice for future backbone. In this paper, we establish the efficiency of multicasting over unicasting in all-optical WDM networks, assess the usefulness of wavelength conversion for multicasting, and explore the issues related to the splitting (or copying) capability of the nodes. The comparison between multicasting and unicasting is based on the number of wavelengths as well as the amount of bandwidth required for a given set of multicasting sessions.
For each multicasting session, a source-specific multicasting forest (or trees) is constructed first, taking into account the sparse splitting capability of the nodes in the network. Then, each multicasting tree is partitioned into segments according to the sparse wavelength conversion capability of the nodes on the tree such that each segment needs to be assigned the same wavelength. Simulation results obtained for a practical network such as NSFNET and randomly generated networks show that multicasting can reduce both the bandwidth consumed and the number of wavelengths required by as much as 50% or more when the size (i.e. the number of destinations) of each multicasting session is reasonably large. Such a reduction due to multicasting is not affected much by the wavelength conversion capability, the number of multicasting sessions and the size of the networks whose topology is more or less random. The results have also shown that sparse splitting can be nearly as effective as full splitting for multicasting.) <|cite_end|>) cannot. MC nodes are equipped with Splitter-and-Delivery cross-connect <|cite_start|> (Reference: Power-efficient design of multicast wavelength routed networks: In this paper, we introduce the power-efficient design space for multicast wavelength-routed networks. The power-efficient design space is based on the impact of power on the overall design of wavelength-routed networks. Two cross-connect architectures on this design concept are investigated. One is an existing architecture called splitter-and-delivery (SaD). The other is a new architecture called multicast-only splitter-and-delivery (MOSaD). The MOSaD architecture uses power splitters for multicast connections only, allowing unicast connections to pass without enduring unnecessary power losses. Our cross-connect design provides a strictly nonblocking service for unicast connections while eliminating unnecessary power loss of the SaD cross-connect. Experimental results demonstrate that the MOSaD architecture provides substantial savings in cost and reduction in signal power loss with minimal effects on the blocking performance of the network.) <|cite_end|> while MI nodes are equipped with Tap-and-Continue (TaC <|cite_start|> (Reference: Cost-effective Implementation of Multicasting in Wavelength-Routed Networks: Multicasting in the optical domain has been recently shown to provide substantial savings in terms of the network-wide average packet hop distance and the total number of transceivers in the network. Current proposed multicasting architectures [e.g., splitter-and-delivery (SaD)] employ power splitting mechanisms which have the side effect of high fabrication cost due to the large number of splitters and the need for optical amplifiers. We propose a low-cost novel architecture called tap-and-continue (TaC) for realizing multicasting. This architecture provides a natural evolution from current unicast cross-connects and is based on tapping devices. We prove that any multicasting session can be feasibly realized in networks employing only TaC cross-connects, and the problem of finding the optimal multiple-destination minimum cost trail in such networks is NP-complete. Therefore, we develop a 4-approximation algorithm for multiple-destination routing. Simulation results demonstrate that the TaC cross-connect provides a realistic, cost-effective approach for implementing multicasting with negligible blocking degradation especially in multifiber networks.) 
<|cite_end|>) cross-connect, which is only able to tap into a small amount of light power and forward the rest to one outgoing port. In sparse light splitting WDM networks, multicast routing aims to find a set of light distribution structures to cover all the multicast group members under optical constraints. In the absence of wavelength conversion, the same wavelength should be retained over all the links of a light distribution structure. The main objective of the multicast routing and wavelength assignment (MRWA) <|cite_start|> (Reference: On multicasting in wavelength-routing mesh networks: ) <|cite_end|> problem is to optimize the optical network resources in terms of total cost (wavelength channel cost), link stress (maximum number of wavelengths required per fiber), optical power attenuation (impacted by the average end-to-end delay and diameter of the tree) as well as the network throughput. Normally, the light-tree structure <|cite_start|> (Reference: Light-trees: optical multicasting for improved performance in wavelength-routed networks: We introduce the concept of a light-tree in a wavelength-routed optical network. A light-tree is a point-to-multipoint generalization of a lightpath. A lightpath is a point-to-point all-optical wavelength channel connecting a transmitter at a source node to a receiver at a destination node. Lightpath communication can significantly reduce the number of hops (or lightpaths) a packet has to traverse; and this reduction can, in turn, significantly improve the network's throughput. We extend the lightpath concept by incorporating an optical multicasting capability at the routing nodes in order to increase the logical connectivity of the network and further decrease its hop distance. We refer to such a point-to-multipoint extension as a light-tree. Light-trees can not only provide improved performance for unicast traffic, but they naturally can better support multicast traffic and broadcast traffic. In this study, we shall concentrate on the application and advantages of light-trees to unicast and broadcast traffic. We formulate the light-tree-based virtual topology design problem as an optimization problem with one of two possible objective functions: for a given traffic matrix, (i) minimize the network-wide average packet hop distance, or (ii) minimize the total number of transceivers in the network. We demonstrate that an optimum light-tree-based virtual topology has clear advantages over an optimum lightpath-based virtual topology with respect to the above two objectives.) <|cite_end|> is thought to be optimal and a set of light-trees (or a light-forest <|cite_start|> (Reference: Constrained multicast routing in wdm networks with sparse light splitting: As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and light splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting one or more multicast trees) for each multicast session.
The performance of these algorithms are compared in terms of the average number of wavelengths used per forest (or multicast session), average number of branches involved (bandwidth) per forest as well as average number of hops encountered (delay) from a multicast source to a multicast member.) <|cite_end|>) is employed to accommodate a multicast session. Accordingly, numerous light-tree construction algorithms have been developed, such as Reroute-to-Source, Member-First and Member-Only <|cite_start|> (Reference: Constrained multicast routing in wdm networks with sparse light splitting: As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and light splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting one or more multicast trees) for each multicast session. The performance of these algorithms are compared in terms of the average number of wavelengths used per forest (or multicast session), average number of branches involved (bandwidth) per forest as well as average number of hops encountered (delay) from a multicast source to a multicast member.) <|cite_end|>. Reroute-to-Source makes use of the shortest path tree and hence is optimal in delay and diameter, but its cost and link stress are too high to be acceptable <|cite_start|> (Reference: Avoidance of multicast incapable branching nodes for multicast routing in WDM networks: In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it can not be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path-tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38% fewer MIB nodes in the NSF topology and 46% fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees.
Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay.) <|cite_end|>. Member-Only is based on the Minimum Path Heuristic and is thus currently thought to achieve the best cost and link stress <|cite_start|> (Reference: On multicasting in wavelength-routing mesh networks: ) <|cite_end|> <|cite_start|> (Reference: Avoidance of multicast incapable branching nodes for multicast routing in WDM networks: In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it can not be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path-tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38% fewer MIB nodes in the NSF topology and 46% fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees. Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay.) <|cite_end|> <|cite_start|> (Reference: Constrained multicast routing in wdm networks with sparse light splitting: As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and light splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting one or more multicast trees) for each multicast session. The performance of these algorithms are compared in terms of the average number of wavelengths used per forest (or multicast session), average number of branches involved (bandwidth) per forest as well as average number of hops encountered (delay) from a multicast source to a multicast member.) <|cite_end|>. In the case of full light splitting, one light-tree is enough to cover all the multicast members, and thus the light-tree structure is optimal in terms of total cost and link stress. But is the light-tree structure still optimal for sparse light splitting WDM networks? The answer is no.
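To make the Minimum Path Heuristic mentioned above concrete, the following minimal sketch (assuming the networkx library; a simplification, not the cited papers' exact procedure) grafts destinations one at a time onto the tree via their shortest path to the current tree. The real Member-Only additionally restricts connection points to MC nodes and leaf MI nodes, and opens a new tree, on a new wavelength, when no destination can join.
\begin{verbatim}
# Simplified minimum-path heuristic behind Member-Only
# (networkx assumed; the MC/MI attachment rules and wavelength
# assignment of the real algorithm are deliberately omitted).
import networkx as nx

def minimum_path_tree(G, source, destinations):
    tree = nx.Graph()
    tree.add_node(source)
    remaining = set(destinations)
    while remaining:
        # shortest distances/paths from the current tree to all nodes
        dist, paths = nx.multi_source_dijkstra(G, sources=set(tree.nodes))
        reachable = [d for d in remaining if d in dist]
        if not reachable:
            break  # leftover destinations would need a new tree/wavelength
        nearest = min(reachable, key=dist.get)
        nx.add_path(tree, paths[nearest])  # graft the path onto the tree
        remaining.discard(nearest)
    return tree
\end{verbatim}
Since every grafted path reuses the same wavelength, the fewer structures needed to cover a group, the lower the link stress; this is exactly the lever the light-hierarchy introduced below exploits.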
Under the splitting constraint, several light-trees may be required to establish one multicast group. The quality of the optimization depends not only on the quality of each light-tree but also on the number of light-trees built for a multicast session. Given a multicast session, the more destinations a light distribution structure can span, the fewer such structures the session will require. Based on this idea, we propose a new multicast structure, the light-hierarchy, which spans as many destinations as possible with the aim of improving the link stress and network throughput. As in a light-tree, only one wavelength is occupied over all the links of a light-hierarchy; unlike a light-tree, however, a light-hierarchy accepts cycles. These cycles permit traversing an MI node of degree at least 4 twice (or more) and thus crosswise switching two signals on the same wavelength to two destinations in the same group by using two different pairs of input and output ports. In this paper, a Graph Renewal Strategy is proposed to improve the link stress and total cost of light-trees, and an In Tree Distance Priority is applied to improve the delay and diameter of light-trees. Then, the Graph Renewal Strategy is extended to compute light-hierarchies, which further improves multicast performance in terms of link stress and network throughput. The rest of the paper is organized as follows. Firstly, the all-optical multicast routing problem is described and the well-known Member-Only algorithm is reviewed in Section~\ref{sec: All-Optical Multicast Routing Problem}. Then the Graph Renewal Strategy, the In Tree Distance Priority, and the new light-hierarchy structure are proposed. Based on these strategies, two multicast routing algorithms, namely the GRDP Light-Tree algorithm and the GRDP Light-Hierarchy algorithm, are presented in Section~\ref{sec: Proposed Solutions}. Alongside the routing problem, the wavelength assignment problem is solved in Section~\ref{sec: Wavelength Assignment}. Numerical results are presented in Section~\ref{sec: Performance Evaluation And Simulation}. Finally, we conclude this paper in Section~\ref{sec: Conclusion}. <|paper_end|>
[ "<|reference_start|> Multicasting in WDM networks: Wavelength-division multiplexing (WDM) networks are believed to be a promising candidate to meet the explosive increase of bandwidth demand in the Internet. In this article, we survey the problems of and approaches to multicasting in WDM networks. In particular, we address the issues in the context of three types of WDM networks: broadcast-and-select, wavelength-routed, and optical burst-switched (OBS) WDM networks. Broadcast-and-select WDM networks are typically for WDM LANs¿MANs, and can be either single-hop or multihop. Various multicast scheduling algorithms (MSAs) are discussed for single-hop networks. For multihop networks, we discuss how channel sharing can be employed to effectively support multicast. In a wavelength-routed WDM network, supporting multicast leads to the multicast routing and wavelength assignment (MC-RWA) problem, which has been discussed for different scenarios, including sparse-splitting networks. We also discuss the problem of efficiently supporting multicast in optical burst-switched (OBS) networks, where the overheads due to control packets and guard bands need to considered. <|reference_end|>", "<|reference_start|> Cost-effective Implementation of Multicasting in Wavelength-Routed Networks: Multicasting in the optical domain has been recently shown to provide substantial savings in terms of the network-wide average packet hop distance and the total number of transceivers in the network. Current proposed multicasting architectures [e.g., splitter-and-delivery (SaD)] employ power splitting mechanisms which have the side effect of high fabrication cost due to the large number of splitters and the need for optical amplifiers. We propose a low-cost novel architecture called tap-and-continue (TaC) for realizing multicasting. This architecture provides a natural evolution from current unicast cross-connects and is based on tapping devices. We prove that any multicasting session can be feasibly realized in networks employing only TaC cross-connects, and the problem of finding the optimal multiple-destination minimum cost trail in such networks is NP-complete. Therefore, we develop a 4-approximation algorithm for multiple-destination routing. Simulation results demonstrate that the TaC cross-connect provides a realistic, cost-effective approach for implementing multicasting with negligible blocking degradation especially in multifiber networks. <|reference_end|>", "<|reference_start|> Constrained multicast routing in wdm networks with sparse light splitting: As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and light splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting one or more multicast trees) for each multicast session. 
The performance of these algorithms are compared in terms of the average number of wavelengths used per forest (or multicast session), average number of branches involved (bandwidth) per forest as well as average number of hops encountered (delay) from a multicast source to a multicast member. <|reference_end|>", "<|reference_start|> Avoidance of multicast incapable branching nodes for multicast routing in WDM networks: In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it can not be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path-tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38% fewer MIB nodes in the NSF topology and 46% fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees. Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay. <|reference_end|>" ]
[ 1, 7, 10, 12 ]
{"<|cite_1|>": "ss-2222657", "<|cite_2|>": "ss-1163652", "<|cite_3|>": "ss-2222657", "<|cite_4|>": "ss-1340688", "<|cite_5|>": "ss-1340688", "<|cite_6|>": "ss-1340688", "<|cite_7|>": "ss-2222439", "<|cite_8|>": "ss-2222440", "<|cite_9|>": "ss-2222658", "<|cite_10|>": "ss-1889310", "<|cite_11|>": "ss-2222441", "<|cite_12|>": "ss-2222441", "<|cite_13|>": "arxiv-17668", "<|multi_cite_15_1|>": "ss-2222658", "<|multi_cite_15_2|>": "arxiv-17668", "<|multi_cite_15_3|>": "ss-2222441"}
2201.04252
<|paper_start|> Title: Generating Connected, Simple, and Realistic Cyber Graphs for Smart Grids Abstract: Generating Connected, Simple, and Realistic Cyber Graphs for Smart Grids: Smart grids integrate communication systems with power networks to enable power grid operation and command through real-time data collection and control signals. Designing, analyzing, and simulating smart grid infrastructures as well as predicting the impact of power network failures strongly depend on the topologies of the underlying power network and communication system. Despite the substantial impact that communication systems have on smart grid operation, the topology of the communication systems employed in smart grids has been less studied. The power community lacks realistic generative communication system models that can be calibrated to match real-world data. To address this issue, this paper proposes a framework to generate the underlying topological graphs for the communication systems deployed in smart grids by mimicking the topology of real-world smart grids. In this regard, we have updated the Chung-Lu algorithm to guarantee the communication network connectivity and to match the degree distribution of a real-world smart grid rather than following an expected degree distribution. In addition, key characteristics of communication systems such as diameter, average shortest paths, clustering coefficients, assortativity, and spectral gap were taken into consideration to generate the most similar real-world communication network for smart grid studies. We believe that the proposed algorithm to generate realistic cyber graphs for smart grid studies will benefit the power community. Introduction \label{sec:intro} Communication systems play a major role in the deployment of smart grids, empowering them to be more resilient, secure, reliable, and manageable and ensuring connectivity of the grid components. The backbone of communication systems in smart grids is represented by the information and communication technologies that allow two-way communication and automated control. Communication systems improve the efficiency and reliability of smart grids by gathering and transmitting a wide variety of data for grid control and decision-making purposes. The integration of cyber communications and control systems into the power distribution infrastructure has a profound impact on the operation, reliability, and efficiency of the power grid. The power and communication systems in modern power grids are highly intertwined. Analyzing, simulating, designing, and predicting the impact of network failures strongly rely on knowledge of the communication network topology <|cite_start|> (Reference: Generating statistically correct random topologies for testing smart grid communication and control networks: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings.
The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data.) <|cite_end|>. Thus, studying the underlying communication network topology is essential for smart grid operation and control <|cite_start|> (Reference: Coordination of Transmission, Distribution and Communication Systems for Prompt Power System Recovery after Disasters: Report – Grid and Communication Interdependency Review and Characterization of Typical Communication Systems: ) <|cite_end|>. In spite of the many models proposed for electrical power systems <|cite_start|> (Reference: The Power Grid Library for Benchmarking AC Optimal Power Flow Algorithms: In recent years, the power systems research community has seen an explosion of novel methods for formulating the AC power flow equations. Consequently, benchmarking studies using the seminal AC Optimal Power Flow (AC-OPF) problem have emerged as the primary method for evaluating these emerging methods. However, it is often difficult to directly compare these studies due to subtle differences in the AC-OPF problem formulation as well as the network, generation, and loading data that are used for evaluation. To help address these challenges, this IEEE PES Task Force report proposes a standardized AC-OPF mathematical formulation and the PGLib-OPF networks for benchmarking AC-OPF algorithms. A motivating study demonstrates some limitations of the established network datasets in the context of benchmarking AC-OPF algorithms and a validation study demonstrates the efficacy of using the PGLib-OPF networks for this purpose. In the interest of scientific discourse and future additions, the PGLib-OPF benchmark library is open-access and all of the network data is provided under a creative commons license.) <|cite_end|>, the problem of modeling the underlying communication network in smart grids has been less studied. In fact, despite the huge efforts devoted to studying smart grid operation and control, modeling smart grids is still in its infancy. There is not enough realistic and practical information about the topology of the underlying communication network in smart grids.
So far, various efforts have focused on developing cyber-physical test models for general use by the power system community <|cite_start|> (Reference: A cyber-physical modeling and assessment framework for power grid infrastructures: The integration of cyber communications and control systems into the power grid infrastructure is widespread and has a profound impact on the operation, reliability, and efficiency of the grid. Cyber technologies allow for efficient management of the power system, but they may contain vulnerabilities that need to be managed. One important possible consequence is the introduction of cyber-induced or cyber-enabled disruptions of physical components. In this paper, we propose an online framework for assessing the operational reliability impacts due to threats to the cyber infrastructure. This framework is an important step toward addressing the critical challenge of understanding and analyzing complex cyber-physical systems at scale.) <|cite_end|> <|cite_start|> (Reference: 2016 IEEE International Conference on Smart Grid Communications, SmartGridComm 2016, Sydney, Australia, November 6-9, 2016: ) <|cite_end|> <|cite_start|> (Reference: 2019 Principles, Systems and Applications of IP Telecommunications, IPTComm 2019, Chicago, IL, USA, October 14-16, 2019: ) <|cite_end|> <|cite_start|> (Reference: 2019 20th International Conference on Intelligent System Application to Power Systems (ISAP): Zero Steady State Error on Voltage Source Inverters) <|cite_end|>. These studies consider different characteristics of communication systems, including vulnerabilities of communication devices, attack paths, etc., to design a practical cyber layer for cyber-physical power systems. However, taking all these characteristics into consideration makes these approaches computationally intractable for larger cyber graphs, as the number of attack paths increases exponentially with the number of nodes. To analyze the impact of cyber graphs on power network operation, e.g., in cascading failure analysis, we first need a fast and reliable framework to generate realistic cyber graphs for power test cases irrespective of their size. A few efforts have been made to generate realistic cyber graphs for power test cases. A graph generator based on the characteristics of the Luxembourg smart grid, which is a power-line communication (PLC) system <|cite_start|> (Reference: For the Grid and Through the Grid: The Role of Power Line Communications in the Smart Grid: Is Power Line Communications (PLC) a good candidate for Smart Grid applications? The objective of this paper is to address this important question. To do so we provide an overview of what PLC can deliver today by surveying its history and describing the most recent technological advances in the area. We then address Smart Grid applications as instances of sensor networking and network control problems and discuss the main conclusion one can draw from the literature on these subjects. The application scenario of PLC within the Smart Grid is then analyzed in detail. Since a necessary ingredient of network planning is modeling, we also discuss two aspects of engineering modeling that relate to our question. The first aspect is modeling the PLC channel through fading models. The second aspect we review is the Smart Grid control and traffic modeling problem which allows us to achieve a better understanding of the communications requirements.
Finally, this paper reports recent studies on the electrical and topological properties of a sample power distribution network. Power grid topological studies are very important for PLC networking as the power grid is not only the information source \textit{but also} the information delivery system - a unique feature when PLC is used for the Smart Grid.) <|cite_end|>, was presented in <|cite_start|> (Reference: Generating statistically correct random topologies for testing smart grid communication and control networks: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data.) <|cite_end|> to create random but realistic smart grid communication topologies. Different characteristics of the power grid, including the nodal degree distribution, graph spectrum, and connectivity scaling property, were analyzed in <|cite_start|> (Reference: Generating statistically correct random topologies for testing smart grid communication and control networks: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well.
With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data.) <|cite_end|> for designing efficient communication schemes for power test cases. Heuristic algorithms were employed in <|cite_start|> (Reference: Reliable Communication Networks for Smart Grid Transmission Systems: ) <|cite_end|> to improve the communication reliability of smart grids at the transmission level. However, many generic graph generation algorithms such as the configuration model <|cite_start|> (Reference: The structure and function of complex networks: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.) <|cite_end|>, the Havel-Hakimi algorithm <|cite_start|> (Reference: On Realizability of a Set of Integers as Degrees of the Vertices of a Linear Graph. I: This paper is mainly concerned with the realizability of a set of n integers as the degrees of vertices of an n-vertex linear graph. Other related problems, such as when a set of integers is realizable as a connected graph, connected graph without “parallel” elements, separable graph, and nonseparable graph, are considered. The relationship between this problem and the problem of isomers in the organic chemistry is described. A similar problem in weighted graphs is also studied.) <|cite_end|>, and the Chung-Lu algorithm do not guarantee graphical and connected outputs, and are thus not suitable for designing communication systems for smart grids. The Horv\'{a}t-Modes model, by contrast, yields both connected and graphical outputs; however, due to its edge connection mechanism, it produces graphs with a large diameter and low assortativity, which are not realistic for communication systems. Thus, it is important to propose generative graph algorithms that take the real-world characteristics of communication systems into account in the design process.
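To make these failure modes concrete, the following minimal sketch (assuming the networkx library and a hypothetical degree sequence) checks the outputs of such generic generators: the configuration model may yield self-loops and parallel edges, while Chung-Lu and Havel-Hakimi outputs are not guaranteed to be connected.
\begin{verbatim}
# Illustrative check of generic degree-driven generators
# (assumes networkx; the degree sequence below is hypothetical).
import networkx as nx

deg_seq = [3, 3, 2, 2, 2, 1, 1, 1, 1]  # even sum, graphical

cm = nx.configuration_model(deg_seq, seed=1)  # returns a MultiGraph
n_loops = nx.number_of_selfloops(cm)
n_parallel = cm.number_of_edges() - nx.Graph(cm).number_of_edges()
print(f"configuration model: {n_loops} self-loops, "
      f"{n_parallel} parallel edges collapsed on simplification")

cl = nx.expected_degree_graph(deg_seq, seed=1, selfloops=False)  # Chung-Lu
print("Chung-Lu output connected?", nx.is_connected(cl))

hh = nx.havel_hakimi_graph(deg_seq)  # simple by construction
print("Havel-Hakimi output connected?", nx.is_connected(hh))
\end{verbatim}
The framework proposed below closes these gaps by tracking remaining rather than expected degrees, enforcing connectivity, and applying edge switching to remove self-loops and parallel edges.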
Power system statistics were leveraged to design optimal communication systems for smart grids in <|cite_start|> (Reference: Generating statistically correct random topologies for testing smart grid communication and control networks: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data.) <|cite_end|>. Similarly, communication system statistics can be leveraged to design a realistic communication system for smart grids, which is the centerpiece of this paper. Along these lines, we first derive the statistical metrics of a real-world smart grid's communication system and then propose a graph generator based on this statistical information. Different graph attributes of a real-world smart grid communication graph, such as diameter and assortativity, are taken into consideration in designing the communication system for power test cases. Moreover, we adapt the Chung-Lu algorithm to preserve the connectivity of the graph, since connectivity is a key characteristic of communication systems. In addition, an edge-switching operation is employed to remove self-loops and parallel edges from the communication graph. The contributions of this work are outlined as follows: (1) To generate connected, simple, and realistic cyber graphs for smart grids, we propose a simple and elegant framework by updating the Chung-Lu algorithm. (2) To satisfy the required degree distribution, we propose an adaptive remaining degree approach instead of fixed expected degrees. (3) To minimize the length of cross edges between power and cyber graphs, we employ the Hungarian algorithm for optimal matching between cyber and power nodes.
(4) To compare the proposed method with currently available approaches, we implement other graph generation methods from the literature, namely the configuration model, Havel-Hakimi, Horv\'{a}t-Modes, and Chung-Lu algorithms, using the same degree sequence, and analyze the global characteristics of the output graphs. The remainder of this paper is organized as follows. Section II proposes the cyber graph generation framework. Section III presents the generated graphs and discusses their global characteristics. Finally, Section IV concludes the paper.
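As referenced in contribution (3), the optimal matching between cyber and power nodes can be computed with the Hungarian algorithm. The sketch below, with hypothetical planar coordinates and SciPy's implementation, minimizes the total cross-edge length:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
power_xy = rng.random((10, 2))   # hypothetical power node coordinates
cyber_xy = rng.random((10, 2))   # hypothetical cyber node coordinates

# Cost matrix: Euclidean length of the cross edge between
# power node i and cyber node j.
cost = np.linalg.norm(power_xy[:, None, :] - cyber_xy[None, :, :], axis=-1)

# Hungarian algorithm: one-to-one assignment minimizing total length.
rows, cols = linear_sum_assignment(cost)
print("total cross-edge length:", cost[rows, cols].sum())
\end{verbatim}
<|paper_end|>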
[ "<|reference_start|> Coordination of Transmission, Distribution and Communication Systems for Prompt Power System Recovery after Disasters:Report – Grid and Communication Interdependency Review and Characterization of Typical Communication Systems: <|reference_end|>", "<|reference_start|> 2019 20th International Conference on Intelligent System Application to Power Systems (ISAP): Zero Steady State Error on Voltage Source Inverters <|reference_end|>", "<|reference_start|> Generating statistically correct random topologies for testing smart grid communication and control networks: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. <|reference_end|>", "<|reference_start|> On Realizability of a Set of Integers as Degrees of the Vertices of a Linear Graph. I: This paper is mainly concerned with the realizability of a set of n integers as the degrees of vertices of an n-vertex linear graph. Other related problems, such as when a set of integers is realizable as a connected graph, connected graph without “parallel” elements, separable graph, and nonseparable graph, are considered. The relationship between this problem and the problem of isomers in the organic chemistry is described. A similar problem in weighted graphs is also studied. <|reference_end|>" ]
[ 1, 6, 9, 12 ]
{"<|cite_1|>": "ss-712486", "<|cite_2|>": "ss-982455", "<|cite_3|>": "ss-1865476", "<|multi_cite_4_1|>": "ss-1121540", "<|multi_cite_4_2|>": "ss-1960472", "<|multi_cite_4_3|>": "ss-1921548", "<|multi_cite_4_4|>": "ss-1328614", "<|cite_5|>": "arxiv-16585", "<|cite_6|>": "ss-712486", "<|cite_7|>": "ss-712486", "<|cite_8|>": "ss-2410840", "<|cite_9|>": "ss-1511514", "<|cite_10|>": "ss-762217", "<|cite_11|>": "ss-712486"}
2009.04416
<|paper_start|> Title: Phasic Policy Gradient Abstract: Phasic Policy Gradient: We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. In prior methods, one must choose between using a shared network or separate networks to represent the policy and value function. Using separate networks avoids interference between objectives, while using a shared network allows useful features to be shared. PPG is able to achieve the best of both worlds by splitting optimization into two phases, one that advances training and one that distills features. PPG also enables the value function to be more aggressively optimized with a higher level of sample reuse. Compared to PPO, we find that PPG significantly improves sample efficiency on the challenging Procgen Benchmark. Introduction Model free reinforcement learning (RL) has enjoyed remarkable success in recent years, achieving impressive results in diverse domains including DoTA <|cite_start|> (Reference: Dota 2 with Large Scale Deep Reinforcement Learning: On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.) <|cite_end|>, Starcraft II <|cite_start|> (Reference: Grandmaster level in StarCraft II using multi-agent reinforcement learning: ) <|cite_end|>, and robotic control <|cite_start|> (Reference: Solving Rubik's Cube with a Robot Hand: We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: https://openai.com/blog/solving-rubiks-cube/) <|cite_end|>. Although policy gradient methods like PPO <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. 
Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|>, A3C <|cite_start|> (Reference: Asynchronous Methods for Deep Reinforcement Learning: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.) <|cite_end|>, and IMPALA are behind some of the most high profile results, many related algorithms have proposed a variety of policy objectives <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> <|cite_start|> (Reference: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. 
With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines) <|cite_end|> <|cite_start|> (Reference: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning: In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.) <|cite_end|> <|cite_start|> (Reference: V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control: Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.) <|cite_end|> <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. 
We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.) <|cite_end|> <|cite_start|> (Reference: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.) <|cite_end|>. All of these algorithms fundamentally rely on the actor-critic framework, with two key quantities driving learning: the policy and the value function. In practice, whether or not to share parameters between the policy and the value function networks is an important implementation decision. There is a clear advantage to sharing parameters: features trained by each objective can be used to better optimize the other. However, there are also disadvantages to sharing network parameters. First, it is not clear how to appropriately balance the competing objectives of the policy and the value function. Any method that jointly optimizes these two objectives with the same network must assign a relative weight to each. Regardless of how well this hyperparameter is chosen, there is a risk that the optimization of one objective will interfere with the optimization of the other. Second, the use of a shared network all but requires the policy and value function objectives to be trained with the same data, and consequently the same level of sample reuse. This is an artificial and undesirable restriction. We address these problems with Phasic Policy Gradient (PPG), an algorithm which preserves the feature sharing between the policy and value function, while otherwise decoupling their training. PPG operates in two alternating phases: the first phase trains the policy, and the second phase distills useful features from the value function. 
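To make the phase structure concrete, below is a minimal, runnable PyTorch sketch of the alternating phases on toy data. The network sizes, hyperparameter values, and the fake rollout (random observations and targets) are illustrative assumptions rather than the settings used in our experiments; a real implementation would substitute environment rollouts and GAE value targets.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS, ACT = 8, 4                      # toy observation/action sizes
N_PI, E_PI, E_AUX = 4, 1, 3          # illustrative phase lengths
BETA_CLONE, EPS = 1.0, 0.2           # illustrative hyperparameters

class PolicyNet(nn.Module):
    # Policy network with an auxiliary value head, as in PPG.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(OBS, 64), nn.Tanh())
        self.pi_head = nn.Linear(64, ACT)
        self.aux_value_head = nn.Linear(64, 1)
    def forward(self, x):
        h = self.body(x)
        return (F.log_softmax(self.pi_head(h), dim=-1),
                self.aux_value_head(h).squeeze(-1))

policy = PolicyNet()
value_net = nn.Sequential(nn.Linear(OBS, 64), nn.Tanh(), nn.Linear(64, 1))
opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)
opt_v = torch.optim.Adam(value_net.parameters(), lr=3e-4)

def fake_rollout(n=256):
    # Stand-in for environment interaction and GAE-style targets.
    obs = torch.randn(n, OBS)
    with torch.no_grad():
        logp, _ = policy(obs)
        act = torch.distributions.Categorical(logits=logp).sample()
        old_logp_a = logp.gather(1, act[:, None]).squeeze(1)
    return obs, act, old_logp_a, torch.randn(n), torch.randn(n)

buffer = []
for _ in range(N_PI):   # ---- policy phase: PPO-style updates ----
    obs, act, old_logp_a, adv, v_targ = fake_rollout()
    for _ in range(E_PI):
        logp, _ = policy(obs)
        ratio = (logp.gather(1, act[:, None]).squeeze(1) - old_logp_a).exp()
        clip_loss = -torch.min(ratio * adv,
                               ratio.clamp(1 - EPS, 1 + EPS) * adv).mean()
        opt_pi.zero_grad(); clip_loss.backward(); opt_pi.step()
        v_loss = F.mse_loss(value_net(obs).squeeze(-1), v_targ)
        opt_v.zero_grad(); v_loss.backward(); opt_v.step()
    buffer.append((obs, v_targ))

with torch.no_grad():   # snapshot pi_old before distillation
    old_logps = [policy(obs)[0] for obs, _ in buffer]

for _ in range(E_AUX):  # ---- auxiliary phase: distill value features ----
    for (obs, v_targ), old_logp in zip(buffer, old_logps):
        logp, aux_v = policy(obs)
        kl = (old_logp.exp() * (old_logp - logp)).sum(-1).mean()
        joint = F.mse_loss(aux_v, v_targ) + BETA_CLONE * kl
        opt_pi.zero_grad(); joint.backward(); opt_pi.step()
        v_loss = F.mse_loss(value_net(obs).squeeze(-1), v_targ)
        opt_v.zero_grad(); v_loss.backward(); opt_v.step()
\end{verbatim}
Note that the auxiliary phase continues to refit the separate value network on the same replay data, while the behavioral-cloning KL term keeps the policy itself approximately fixed during distillation.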
More generally, PPG can be used to perform any auxiliary optimization alongside RL, though in this work we take value function error to be the sole auxiliary objective. Using PPG, we highlight two important observations about on-policy actor-critic methods: \begin{enumerate} \item Interference between policy and value function optimization can negatively impact performance when parameters are shared between the policy and the value function networks. \item Value function optimization often tolerates a significantly higher level of sample reuse than policy optimization. \end{enumerate} By mitigating the interference between the policy and value function objectives while still sharing representations, and by optimizing each with the appropriate level of sample reuse, PPG significantly improves sample efficiency. Related Work <|cite_start|> (Reference: The Impact of Non-stationarity on Generalisation in Deep Reinforcement Learning: Non-stationarity arises in Reinforcement Learning (RL) even in stationary environments. Most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Furthermore, training targets in RL can change even with a fixed state distribution when the policy, critic, or bootstrap values are updated. We study these types of non-stationarity in supervised learning settings as well as in RL, finding that they can lead to worse generalisation performance when using deep neural network function approximators. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom.) <|cite_end|> recently proposed Iterated Relearning (ITER) to reduce the impact of non-stationarity during RL training. ITER and PPG share a striking similarity: both algorithms alternate between a standard RL phase and a distillation phase. However, the nature and purpose of the distillation phase varies. In ITER, the policy and value function teachers are periodically distilled into newly initialized student networks, in an effort to improve generalization. In PPG, the value function network is periodically distilled into the policy network, in an effort to improve sample efficiency. Prior work has considered the role the value function plays as an auxiliary task. <|cite_start|> (Reference: A Geometric Perspective on Optimal Representations for Reinforcement Learning: We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005).
We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain.) <|cite_end|> investigate using value functions to train useful representations, specifically focusing on a special class of value functions called Adversarial Value Functions (AVFs). They find that AVFs provide a useful auxiliary objective in the four-room domain. <|cite_start|> (Reference: A Comparative Analysis of Expected and Distributional Reinforcement Learning: Since their introduction a year ago, distributional approaches to reinforcement learning (distributional RL) have produced strong results relative to the standard approach which models expected values (expected RL). However, aside from convergence guarantees, there have been few theoretical results investigating the reasons behind the improvements distributional RL provides. In this paper we begin the investigation into this fundamental question by analyzing the differences in the tabular, linear approximation, and non-linear approximation settings. We prove that in many realizations of the tabular and linear approximation settings, distributional RL behaves exactly the same as expected RL. In cases where the two methods behave differently, distributional RL can in fact hurt performance when it does not induce identical behaviour. We then continue with an empirical analysis comparing distributional and expected RL methods in control settings with non-linear approximators to tease apart where the improvements from distributional RL methods are coming from.) <|cite_end|> suggest that the benefits of distributional RL <|cite_start|> (Reference: A Distributional Perspective on Reinforcement Learning: In this paper we argue for the fundamental importance of the value distribution: the distribution of the random return received by a reinforcement learning agent. This is in contrast to the common approach to reinforcement learning which models the expectation of this return, or value. Although there is an established body of literature studying the value distribution, thus far it has always been used for a specific purpose such as implementing risk-aware behaviour. We begin with theoretical results in both the policy evaluation and control settings, exposing a significant distributional instability in the latter. We then use the distributional perspective to design a new algorithm which applies Bellman's equation to the learning of approximate value distributions. We evaluate our algorithm using the suite of games from the Arcade Learning Environment. We obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning. Finally, we combine theoretical and empirical evidence to highlight the ways in which the value distribution impacts learning in the approximate setting.) <|cite_end|> can perhaps be attributed to the rich signal the value function distribution provides as an auxiliary task. We find that the representation learning performed by the value function is indeed critical in Procgen environments, although we consider only the value function of the current policy, and we do not model the full value distribution. 
Off-policy algorithms like Soft Actor-Critic (SAC) <|cite_start|> (Reference: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.) <|cite_end|>, Deep Deterministic Policy Gradient (DDPG) <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.) <|cite_end|>, and Actor-Critic with Experience Replay (ACER) <|cite_start|> (Reference: Sample Efficient Actor-Critic with Experience Replay: This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.) <|cite_end|> all employ replay buffers to improve sample efficiency via off-policy updates. PPG also utilizes a replay buffer, specifically when performing updates during the auxiliary phase. However, unlike these algorithms, PPG does not attempt to improve the policy from off-policy data. Rather, this replay buffer data is used only to better fit the value targets and to better train features for the policy. SAC also notably uses separate policy and value function networks, presumably, like PPG, to avoid interference between their respective objectives. 
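For concreteness, the objective PPG optimizes on this replay data during the auxiliary phase can be sketched as pairing the auxiliary value error with a behavioral-cloning term that keeps the policy close to its snapshot ($\hat{V}_t^{\mathrm{targ}}$ denotes the stored value targets):
\[
L^{joint} \;=\; \hat{\mathbb{E}}_t\Big[\tfrac{1}{2}\big(V_\theta(s_t)-\hat{V}_t^{\mathrm{targ}}\big)^2\Big] \;+\; \beta_{clone}\,\hat{\mathbb{E}}_t\Big[\mathrm{KL}\big(\pi_{\theta_{old}}(\cdot\,|\,s_t)\,\big\|\,\pi_\theta(\cdot\,|\,s_t)\big)\Big],
\]
where $\pi_{\theta_{old}}$ is the policy at the start of the auxiliary phase and $\beta_{clone}$ trades off feature distillation against policy drift.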
Although we use the clipped surrogate objective from PPO <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|> throughout this work, PPG is in principle compatible with the policy objectives from any actor-critic algorithm. <|cite_start|> (Reference: What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study: In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such ``choices'' in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.) <|cite_end|> recently performed a rigorous empirical comparison of many relevant algorithms in the on-policy setting. In particular, AWR <|cite_start|> (Reference: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning: In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. 
We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.) <|cite_end|> and V-MPO <|cite_start|> (Reference: V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control: Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.) <|cite_end|> propose alternate policy objectives that move the current policy towards one which weights the likelihood of each action by the exponentiated advantage of that action. Such objectives could be used in PPG, in place of the PPO objective. There are also several trust region methods, similar in spirit to PPO, that would be compatible with PPG. Trust Region Policy Optimization (TRPO) <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> proposed performing policy updates by optimizing a surrogate objective, whose gradient is the policy gradient estimator, subject to a constraint on the KL-divergence between the original policy and the updated policy. 
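For reference, TRPO's constrained surrogate and the clipped surrogate from PPO used throughout this work can be written, with probability ratio $r_t(\theta)=\pi_\theta(a_t\,|\,s_t)/\pi_{\theta_{old}}(a_t\,|\,s_t)$ and advantage estimate $\hat{A}_t$, as
\[
\max_\theta\;\hat{\mathbb{E}}_t\big[r_t(\theta)\,\hat{A}_t\big] \quad \text{subject to} \quad \hat{\mathbb{E}}_t\big[\mathrm{KL}\big(\pi_{\theta_{old}}(\cdot\,|\,s_t)\,\|\,\pi_\theta(\cdot\,|\,s_t)\big)\big]\le\delta,
\]
\[
L^{CLIP}(\theta) \;=\; \hat{\mathbb{E}}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\;\mathrm{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big].
\]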
Actor Critic using Kronecker-Factored Trust Region (ACKTR) <|cite_start|> (Reference: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines) <|cite_end|> uses Kronecker-factored approximate curvature (K-FAC) to perform a similar trust region update, but with a computational cost comparable to SGD. Both methods could be used in the PPG framework. <|paper_end|>
[ "<|reference_start|> The Impact of Non-stationarity on Generalisation in Deep Reinforcement Learning: Non-stationarity arises in Reinforcement Learning (RL) even in stationary environments. Most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Furthermore, training targets in RL can change even with a fixed state distribution when the policy, critic, or bootstrap values are updated. We study these types of non-stationarity in supervised learning settings as well as in RL, finding that they can lead to worse generalisation performance when using deep neural network function approximators. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom. <|reference_end|>", "<|reference_start|> A Geometric Perspective on Optimal Representations for Reinforcement Learning: We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005). We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain. <|reference_end|>", "<|reference_start|> What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study: In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such ``choices'' in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents. <|reference_end|>", "<|reference_start|> Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning: In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. 
Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters. <|reference_end|>" ]
[ 11, 12, 19, 20 ]
{"<|cite_5|>": "arxiv-239288", "<|cite_6|>": "ss-679381", "<|cite_7|>": "arxiv-229065", "<|cite_8|>": "arxiv-129813", "<|cite_9|>": "arxiv-91622", "<|multi_cite_11_1|>": "arxiv-73321", "<|multi_cite_11_2|>": "arxiv-132151", "<|multi_cite_11_3|>": "arxiv-226496", "<|multi_cite_11_4|>": "arxiv-225819", "<|multi_cite_11_5|>": "arxiv-83736", "<|multi_cite_11_6|>": "arxiv-144586", "<|cite_1|>": "ss-1979389", "<|cite_2|>": "arxiv-189765", "<|cite_3|>": "arxiv-189611", "<|cite_12|>": "arxiv-129961", "<|cite_13|>": "arxiv-144586", "<|cite_14|>": "arxiv-83736", "<|cite_15|>": "arxiv-109317", "<|cite_16|>": "arxiv-129813", "<|cite_4|>": "arxiv-270904", "<|cite_17|>": "arxiv-226496", "<|cite_18|>": "arxiv-225819", "<|cite_19|>": "arxiv-73321", "<|cite_20|>": "arxiv-132151"}
2102.01930
<|paper_start|> Title: General-Purpose Speech Representation Learning through a Self-Supervised Multi-Granularity Framework Abstract: General-Purpose Speech Representation Learning through a Self-Supervised Multi-Granularity Framework: This paper presents a self-supervised learning framework, named MGF, for general-purpose speech representation learning. In the design of MGF, speech hierarchy is taken into consideration. Specifically, we propose to use generative learning approaches to capture fine-grained information at small time scales and use discriminative learning approaches to distill coarse-grained or semantic information at large time scales. For phoneme-scale learning, we borrow the idea from the masked language model but tailor it for the continuous speech signal by replacing the classification loss with a contrastive loss. We corroborate our design by evaluating the MGF representation on various downstream tasks, including phoneme classification, speaker classification, speech recognition, and emotion classification. Experiments verify that training at different time scales needs different training targets and loss functions, which in general complement each other and lead to better performance. Introduction Unsupervised pre-training, or representation learning, has drawn wide interest in both academia and industry. The BERT model <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including the automatic punctuation task, depend on effectively solving a prediction problem: determining exactly which token should come next. This paper considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this paper considers the use of the bidirectional encoders of the BERT model on tokenized data.) <|cite_end|> has become a universal feature extractor for solving a wide range of natural language processing (NLP) tasks. Recently, it has been reported that the image embedding learned in an unsupervised manner achieves comparable performance to its supervised counterparts in the image classification task <|cite_start|> (Reference: Momentum Contrast for Unsupervised Visual Representation Learning: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.) <|cite_end|> <|cite_start|> (Reference: A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank.
In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.) <|cite_end|>. In fact, most contemporary unsupervised pre-training methods adopt a self-supervised learning approach; we use these two terms interchangeably in this paper to refer to methods that do not need human annotation. In the speech domain, pre-training is not a new concept. The speaker recognition task depends heavily on the supervised pre-training step to obtain a good feature embedding. Recently, self-supervised learning has also been used to pre-train dedicated models for automatic speech recognition (ASR) <|cite_start|> (Reference: wav2vec: Unsupervised Pre-training for Speech Recognition: We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using two orders of magnitude less labeled training data.) <|cite_end|> <|cite_start|> (Reference: vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations: We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.) <|cite_end|> <|cite_start|> (Reference: wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations: We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned.
Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.) <|cite_end|> <|cite_start|> (Reference: Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition: We propose a novel approach to semi-supervised automatic speech recognition (ASR). We first exploit a large amount of unlabeled audio data via representation learning, where we reconstruct a temporal slice of filterbank features from past and future context frames. The resulting deep contextualized acoustic representations (DeCoAR) are then used to train a CTC-based end-to-end ASR system using a smaller amount of labeled audio data. In our experiments, we show that systems trained on DeCoAR consistently outperform ones trained on conventional filterbank features, giving 42% and 19% relative improvement over the baseline on WSJ eval92 and LibriSpeech test-clean, respectively. Our approach can drastically reduce the amount of labeled data required; unsupervised training on LibriSpeech then supervision with 100 hours of labeled data achieves performance on par with training on all 960 hours directly. Pre-trained models and code will be released online.) <|cite_end|>. In this work, however, we are not focusing on such task-oriented pre-training. Instead, we aim to pre-train a general-purpose feature extractor which embeds a speech signal into a feature representation that can be used for a variety of downstream speech tasks, in a way similar to how pre-trained language and image representations are used in their respective domains. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{speech_structure.pdf} \caption{Speech hierarchy. The waveform is sampled at 16 kHz. Sample points within a 10ms segment form a frame, which is the basic operating unit in many speech algorithms. Phonemic information can be extracted from several frames, as we illustrate with red boxes. The whole sentence lasts for more than one second.} \label{fig:structure} \end{figure} The main difficulty in learning a general-purpose speech representation is that speech carries a complex hierarchical structure (samples, phonemes, and sentences) which contains relevant information at different time scales <|cite_start|> (Reference: Learning Problem-agnostic Speech Representations from Multiple Self-supervised Tasks: Learning good representations without supervision is still an open issue in machine learning, and is particularly challenging for speech signals, which are often characterized by long sequences with a complex hierarchical structure. Some recent works, however, have shown that it is possible to derive useful speech representations by employing a self-supervised encoder-discriminator approach. This paper proposes an improved self-supervised method, where a single neural encoder is followed by multiple workers that jointly solve different self-supervised tasks. The needed consensus across different tasks naturally imposes meaningful constraints to the encoder, contributing to discover general representations and to minimize the risk of learning superficial ones.
Experiments show that the proposed approach can learn transferable, robust, and problem-agnostic features that carry on relevant information from the speech signal, such as speaker identity, phonemes, and even higher-level features such as emotional cues. In addition, a number of design choices make the encoder easily exportable, facilitating its direct usage or adaptation to different problems.) <|cite_end|>. In this work, we propose a Multi-Granularity Framework, named MGF, to train the model at multiple time scales. A key innovation in MGF is to adopt different learning approaches for learning at different time scales. In particular, we use generative approaches to capture fine-grained information at small time scales on the order of a few milliseconds, and we adopt discriminative approaches to distill semantic information at large time scales corresponding to phonemes and sentences. In order to realize phoneme-level contrastive learning, we extend the token-oriented masked language model (MLM) <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including the automatic punctuation task, depend on effectively solving a prediction problem: determining exactly which token should come next. This paper considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this paper considers the use of the bidirectional encoders of the BERT model on tokenized data.) <|cite_end|> to a continuous masked language model (cMLM) that accommodates continuous speech signals without token boundaries. MGF is implemented by a deep bidirectional Transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including the automatic punctuation task, depend on effectively solving a prediction problem: determining exactly which token should come next. This paper considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this paper considers the use of the bidirectional encoders of the BERT model on tokenized data.) <|cite_end|>.
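As a minimal sketch of one plausible instantiation of cMLM (the learned mask embedding, temperature, and within-utterance negative sampling shown here are illustrative assumptions rather than the exact design), masked frame positions are filled with a mask embedding, encoded bidirectionally, and trained with a contrastive loss that identifies the true frame at each masked position against frames at other positions:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, N_MASK = 100, 256, 10
frames = torch.randn(1, T, D)            # toy pre-extracted frame features

layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
mask_emb = nn.Parameter(torch.zeros(D))  # learned mask embedding (assumed)

idx = torch.randperm(T)[:N_MASK]         # frame positions to mask
inp = frames.clone()
inp[0, idx] = mask_emb                   # mask, as in MLM but on continuous frames
ctx = encoder(inp)                       # bidirectional context

# Contrastive loss: the context at a masked position should match the true
# frame there (positive) rather than frames elsewhere (negatives).
pred = F.normalize(ctx[0, idx], dim=-1)          # (N_MASK, D)
cand = F.normalize(frames[0], dim=-1)            # (T, D) candidate targets
logits = pred @ cand.t() / 0.1                   # temperature 0.1 (assumed)
loss = F.cross_entropy(logits, idx)
loss.backward()
\end{verbatim}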
We evaluate the MGF representation on multiple downstream tasks and benchmark datasets, which constitutes the second main contribution of our work. The performance of MGF is first evaluated on phoneme classification and speaker classification tasks, following other general-purpose speech representation learning work <|cite_start|> (Reference: Representation Learning with Contrastive Predictive Coding: While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.) <|cite_end|> <|cite_start|> (Reference: Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders: We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn through conditioning on past frames and predicting information about future frames. Whereas Mockingjay is designed to predict the current frame through jointly conditioning on both past and future contexts. The Mockingjay representation improves performance for a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models, with only 2 epochs we further improve performance dramatically. In a low resource setting with only 0.1% of labeled data, we outperform the result of Mel-features that uses all 100% labeled data.) <|cite_end|>. We find that the features learned by MGF are very powerful on these two orthogonal tasks. On the LibriSpeech dataset, the MGF representation achieves a phoneme classification accuracy of 73.4\% under linear evaluation, surpassing existing unsupervised pre-training methods by a large margin. On the speaker classification task, the MGF representation is the first to achieve an accuracy of 100\%. We further evaluate MGF on three other downstream tasks. First, in view of the saturated performance in speaker classification, we propose a new and harder task named \textit{one-shot speaker classification}, where only one utterance per speaker is provided in the fine-tuning stage. In this task, MGF is evaluated against the well-known x-vector and d-vector and is shown to achieve better performance. Second, we compare MGF with the task-specific pre-training model wav2vec on the ASR task. Third, we test the MGF representation on the IEMOCAP emotion classification task.
Surprisingly, simply appending a fully-connected layer after MGF achieves the top performance among all existing audio-based approaches. Related Work \label{sec:2} There are two camps of self-supervised learning approaches, namely discriminative and generative approaches. We will first review these two approaches for speech pre-training, and then discuss other related work that motivates MGF. \subsection{Discriminative Approaches} \label{subsec:2.1} Discriminative approaches acquire a supervision signal from the contrastive distance between a selected positive sample and several negative samples. By carefully designing the training target and the data sampling procedure, samples can be automatically labelled. Contrastive predictive coding (CPC) <|cite_start|> (Reference: Representation Learning with Contrastive Predictive Coding: While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.) <|cite_end|> is a contrastive learning method based on predicting the future in the latent space. The representations of temporally nearby segments are treated as positive samples, while those of temporally distant segments are treated as negative samples. However, one can easily find a counterexample in speech processing: suppose a word appears twice in an utterance with the same meaning. When the first appearance is the anchor, the second appearance should not be treated as a negative sample, no matter how far apart the two occurrences are. Previous work <|cite_start|> (Reference: An Unsupervised Autoregressive Model for Speech Representation Learning: This paper proposes a novel unsupervised autoregressive neural model for learning generic speech representations. In contrast to other speech representation learning methods that aim to remove noise or speaker variabilities, ours is designed to preserve information for a wide range of downstream tasks. In addition, the proposed model does not require any phonetic or word boundary labels, allowing the model to benefit from large quantities of unlabeled data. Speech representations learned by our model significantly improve performance on both phone classification and speaker verification over the surface features and other supervised and unsupervised approaches. Further analysis shows that different levels of speech information are captured by our model at different layers. In particular, the lower layers tend to be more discriminative for speakers, while the upper layers provide more phonetic content.)
<|cite_end|> also notices that the choice of negative samples in CPC has a huge effect on its performance on the phoneme classification task. While CPC itself is a general-purpose speech pre-training method, it can be leveraged in some task-specific pre-training models, such as wav2vec <|cite_start|> (Reference: wav2vec: Unsupervised Pre-training for Speech Recognition: We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using two orders of magnitude less labeled training data.) <|cite_end|>, vq-wav2vec <|cite_start|> (Reference: vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations: We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.) <|cite_end|>, and wav2vec 2.0 <|cite_start|> (Reference: wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations: We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.) <|cite_end|>. Vq-wav2vec proposes a quantization algorithm so that wav2vec (which adopts CPC) can be combined with the BERT model <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including automatic punctuation, depend on effectively solving the prediction task of determining which token should come next. This work considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this work considers the use of the bidirectional encoders of the BERT model with tokenized data.) <|cite_end|> to achieve better performance. Wav2vec 2.0 improves vq-wav2vec by training the entire model end-to-end.
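For concreteness, below is a minimal sketch of the InfoNCE-style contrastive objective that underlies CPC and the wav2vec family; the tensor shapes and temperature are illustrative assumptions rather than any specific model's configuration.

\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """CPC-style contrastive loss: score the positive latent (e.g. a
    temporally nearby frame) against negative latents (e.g. frames
    sampled from distant segments or other utterances).

    anchor:    (batch, dim)    context representation c_t
    positive:  (batch, dim)    latent z_{t+k} of a nearby/future frame
    negatives: (batch, n, dim) latents of distractor frames
    """
    pos = torch.einsum("bd,bd->b", anchor, positive).unsqueeze(1)  # (batch, 1)
    neg = torch.einsum("bd,bnd->bn", anchor, negatives)            # (batch, n)
    logits = torch.cat([pos, neg], dim=1) / temperature            # positive is class 0
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}

The counterexample above maps directly onto this sketch: if a repeated word ends up among the negatives, the loss wrongly pushes its representation away from the anchor.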
Wav2vec 2.0 also uses a very large unlabelled dataset for pre-training. These task-specific pre-trained models are very powerful on their target task, but perform poorly on other speech tasks. \subsection{Generative Approaches} Generative approaches learn to reconstruct the signal in the input space or features in some latent space. Training is supervised by the reconstruction loss. Autoregressive predictive coding (APC) <|cite_start|> (Reference: An Unsupervised Autoregressive Model for Speech Representation Learning: This paper proposes a novel unsupervised autoregressive neural model for learning generic speech representations. In contrast to other speech representation learning methods that aim to remove noise or speaker variabilities, ours is designed to preserve information for a wide range of downstream tasks. In addition, the proposed model does not require any phonetic or word boundary labels, allowing the model to benefit from large quantities of unlabeled data. Speech representations learned by our model significantly improve performance on both phone classification and speaker verification over the surface features and other supervised and unsupervised approaches. Further analysis shows that different levels of speech information are captured by our model at different layers. In particular, the lower layers tend to be more discriminative for speakers, while the upper layers provide more phonetic content.) <|cite_end|> uses an autoregressive model to encode the history and predict the future. A follow-up work <|cite_start|> (Reference: Improved Speech Representations with Multi-Target Autoregressive Predictive Coding: Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.) <|cite_end|> adds an auxiliary objective which encourages the model to additionally remember the past. DeCoAR <|cite_start|> (Reference: Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition: We propose a novel approach to semi-supervised automatic speech recognition (ASR). We first exploit a large amount of unlabeled audio data via representation learning, where we reconstruct a temporal slice of filterbank features from past and future context frames. The resulting deep contextualized acoustic representations (DeCoAR) are then used to train a CTC-based end-to-end ASR system using a smaller amount of labeled audio data.
In our experiments, we show that systems trained on DeCoAR consistently outperform ones trained on conventional filterbank features, giving 42% and 19% relative improvement over the baseline on WSJ eval92 and LibriSpeech test-clean, respectively. Our approach can drastically reduce the amount of labeled data required; unsupervised training on LibriSpeech then supervision with 100 hours of labeled data achieves performance on par with training on all 960 hours directly. Pre-trained models and code will be released online.) <|cite_end|> borrows the bidirectional learning idea from ELMo <|cite_start|> (Reference: Deep contextualized word representations: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.) <|cite_end|> so that it can learn deep contextualized acoustic representations for semi-supervised speech recognition. Inspired by the MLM proposed in BERT <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including automatic punctuation, depend on effectively solving the prediction task of determining which token should come next. This work considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this work considers the use of the bidirectional encoders of the BERT model with tokenized data.) <|cite_end|>, recent works <|cite_start|> (Reference: Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders: We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn through conditioning on past frames and predicting information about future frames. Whereas Mockingjay is designed to predict the current frame through jointly conditioning on both past and future contexts. The Mockingjay representation improves performance for a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models, with only 2 epochs we further improve performance dramatically. In a low resource setting with only 0.1% of labeled data, we outperform the result of Mel-features that uses all 100% labeled data.) <|cite_end|> <|cite_start|> (Reference: {TERA:: Tera is the Hausa and English name for the Nyimatli [nimaáli] people as they call themselves, and their language. Their communities lie principally in the north and east of present-day Gombe State and in the adjoining area of Borno State in north-eastern Nigeria.
There are approximately 100,000 people who speak the language as their mother tongue (Gordon & Grimes 2005: 175), many of whom also use Hausa as the local lingua franca; increasing numbers are trilingual as the result of the growing importance of English in commerce and education.) <|cite_end|> have explored using a BERT-style objective in speech pre-training. In Mockingjay <|cite_start|> (Reference: Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders: We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn through conditioning on past frames and predicting information about future frames. Whereas Mockingjay is designed to predict the current frame through jointly conditioning on both past and future contexts. The Mockingjay representation improves performance for a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models, with only 2 epochs we further improve performance dramatically. In a low resource setting with only 0.1% of labeled data, we outperform the result of Mel-features that uses all 100% labeled data.) <|cite_end|>, part of the input frames is masked to zeros, and the encoder is trained to predict the masked frames from their neighborhood. TERA <|cite_start|> (Reference: {TERA:: Tera is the Hausa and English name for the Nyimatli [nimaáli] people as they call themselves, and their language. Their communities lie principally in the north and east of present-day Gombe State and in the adjoining area of Borno State in north-eastern Nigeria. There are approximately 100,000 people who speak the language as their mother tongue (Gordon & Grimes 2005: 175), many of whom also use Hausa as the local lingua franca; increasing numbers are trilingual as the result of the growing importance of English in commerce and education.) <|cite_end|> extends Mockingjay by introducing channel alteration and magnitude alteration. \subsection{Multi-Task Approaches} PASE <|cite_start|> (Reference: Learning Problem-agnostic Speech Representations from Multiple Self-supervised Tasks: Learning good representations without supervision is still an open issue in machine learning, and is particularly challenging for speech signals, which are often characterized by long sequences with a complex hierarchical structure. Some recent works, however, have shown that it is possible to derive useful speech representations by employing a self-supervised encoder-discriminator approach. This paper proposes an improved self-supervised method, where a single neural encoder is followed by multiple workers that jointly solve different self-supervised tasks. The needed consensus across different tasks naturally imposes meaningful constraints to the encoder, contributing to discover general representations and to minimize the risk of learning superficial ones. Experiments show that the proposed approach can learn transferable, robust, and problem-agnostic features that carry on relevant information from the speech signal, such as speaker identity, phonemes, and even higher-level features such as emotional cues.
In addition, a number of design choices make the encoder easily exportable, facilitating its direct usage or adaptation to different problems.) <|cite_end|> uses multiple regressors and discriminators to learn a problem-agnostic speech encoder. Another work, PASE+ <|cite_start|> (Reference: Multi-task self-supervised learning for Robust Speech Recognition: Despite the growing interest in unsupervised learning, extracting meaningful knowledge from unlabelled audio remains an open challenge. To take a step in this direction, we recently proposed a problem-agnostic speech encoder (PASE), that combines a convolutional encoder followed by multiple neural networks, called workers, tasked to solve self-supervised problems (i.e., ones that do not require manual annotations as ground truth). PASE was shown to capture relevant speech information, including speaker voice-print and phonemes. This paper proposes PASE+, an improved version of PASE for robust speech recognition in noisy and reverberant environments. To this end, we employ an online speech distortion module, that contaminates the input signals with a variety of random disturbances. We then propose a revised encoder that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks. Finally, we refine the set of workers used in self-supervision to encourage better cooperation. Results on TIMIT, DIRHA and CHiME-5 show that PASE+ significantly outperforms both the previous version of PASE as well as common acoustic features. Interestingly, PASE+ learns transferable representations suitable for highly mismatched acoustic conditions.) <|cite_end|>, improves PASE for robust speech recognition in noisy and reverberant environments by introducing data augmentation, more regression tasks, and a collection of architectural modifications. Our work and PASE both consider combinations of generative and discriminative objectives. However, PASE does not consider the hierarchical structure of speech. In our work, different objectives are used to handle signals at different time scales. \subsection{Self-Supervised Learning in Other Domains} Our work is inspired by some self-supervised learning methods in other domains. BERT <|cite_start|> (Reference: {BERT:: Many current NLP tasks, including automatic punctuation, depend on effectively solving the prediction task of determining which token should come next. This work considers the subtask of predicting the next token based on the previous ones. The main problem with existing approaches is that they are not equally effective. To address this problem, this work considers the use of the bidirectional encoders of the BERT model with tokenized data.) <|cite_end|> is a milestone work for pre-training in NLP. The core of BERT is the MLM, where some input tokens are randomly masked out, and the training objective is to predict the vocabulary ID of the masked word based only on its context. SimCLR <|cite_start|> (Reference: A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework.
We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.) <|cite_end|> proposes a simple contrastive learning framework for visual representation learning. It applies a contrastive loss between augmented views of an image without relying on a specialized architecture design or a memory bank mechanism. BERT and SimCLR inspired our phoneme-scale and sentence-scale contrastive learning, respectively. <|paper_end|>
[ "<|reference_start|> Representation Learning with Contrastive Predictive Coding: While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments. <|reference_end|>", "<|reference_start|> Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders: We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn through conditioning on past frames and predicting information about future frames. Whereas Mockingjay is designed to predict the current frame through jointly conditioning on both past and future contexts. The Mockingjay representation improves performance for a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models, with only 2 epochs we further improve performance dramatically. In a low resource setting with only 0.1% of labeled data, we outperform the result of Mel-features that uses all 100% labeled data. <|reference_end|>", "<|reference_start|> {TERA:: Tera is the Hausa and English name for the Nyimatli [nimaáli] people as they call themselves, and their language. Their communities lie principally in the north and east of present-day Gombe State and in the adjoining area of Borno State in north-eastern Nigeria. There are approximately 100,000 people who speak the language as their mother tongue (Gordon & Grimes 2005: 175), many of whom also use Hausa as the local lingua franca; increasing numbers are trilingual as the result of the growing importance of English in commerce and education. <|reference_end|>", "<|reference_start|> A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. 
We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels. <|reference_end|>" ]
[ 13, 24, 25, 31 ]
{"<|cite_1|>": "ss-1457177", "<|multi_cite_2_1|>": "arxiv-234041", "<|multi_cite_2_2|>": "arxiv-248169", "<|multi_cite_3_1|>": "arxiv-199549", "<|multi_cite_3_2|>": "arxiv-228465", "<|multi_cite_3_3|>": "ss-769086", "<|multi_cite_3_4|>": "arxiv-237630", "<|cite_4|>": "arxiv-198607", "<|cite_5|>": "ss-1457177", "<|multi_cite_6_1|>": "arxiv-126595", "<|multi_cite_6_2|>": "ss-1457177", "<|multi_cite_7_1|>": "arxiv-165446", "<|multi_cite_7_2|>": "arxiv-231175", "<|cite_8|>": "arxiv-165446", "<|cite_9|>": "arxiv-198522", "<|cite_10|>": "arxiv-199549", "<|cite_11|>": "arxiv-228465", "<|cite_12|>": "ss-769086", "<|cite_13|>": "ss-1457177", "<|cite_14|>": "arxiv-198522", "<|cite_15|>": "arxiv-258778", "<|cite_16|>": "arxiv-237630", "<|cite_17|>": "arxiv-148438", "<|cite_18|>": "ss-1457177", "<|multi_cite_19_1|>": "arxiv-231175", "<|multi_cite_19_2|>": "ss-1965639", "<|cite_20|>": "arxiv-231175", "<|cite_21|>": "ss-1965639", "<|cite_22|>": "arxiv-198607", "<|cite_23|>": "arxiv-244859", "<|cite_24|>": "ss-1457177", "<|cite_25|>": "arxiv-248169"}
2406.06184
<|paper_start|> Title: Deep Multi-Objective Reinforcement Learning for Utility-Based Infrastructural Maintenance Optimization Abstract: Deep Multi-Objective Reinforcement Learning for Utility-Based Infrastructural Maintenance Optimization: In this paper, we introduce Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC), a multi-objective reinforcement learning (MORL) method for infrastructural maintenance optimization, an area traditionally dominated by single-objective reinforcement learning (RL) approaches. Previous single-objective RL methods combine multiple objectives, such as probability of collapse and cost, into a singular reward signal through reward-shaping. In contrast, MO-DCMAC can optimize a policy for multiple objectives directly, even when the utility function is non-linear. We evaluated MO-DCMAC using two utility functions, which use probability of collapse and cost as input. The first utility function is the Threshold utility, in which MO-DCMAC should minimize cost so that the probability of collapse is never above the threshold. The second is based on the Failure Mode, Effects, and Criticality Analysis (FMECA) methodology used by asset managers to assess maintenance plans. We evaluated MO-DCMAC, with both utility functions, in multiple maintenance environments, including ones based on a case study of the historical quay walls of Amsterdam. The performance of MO-DCMAC was compared against multiple rule-based policies based on heuristics currently used for constructing maintenance plans. Our results demonstrate that MO-DCMAC outperforms traditional rule-based policies across various environments and utility functions. Introduction \label{sec:introduction} For any nation, a robust and functional infrastructure system is required for the efficient transportation of commercial goods, individuals, and essential services, such as clean water and electricity. This is evidenced by the distinct correlation between a country's Gross Domestic Product (GDP) and the level of development of its infrastructure <|cite_start|> (Reference: The Effect of Investment in Transportation Infrastructure on the Debt-to-GDP Ratio: This paper examines the relationship between investment in transportation infrastructure capital and the debt-to-gross domestic product (GDP) ratio. We analyse the effect of bringing forward investment originally planned for future years to be executed during times of economic crisis and also consider the possible advantages of carrying out such investments with private sector financing. This paper presents a model which shows how policy aimed to encourage investment in transportation infrastructure projects through private sector participation may help raise long-term GDP and thus lead to a lower debt-to-GDP ratio. The theoretical model is then applied to current empirical data from Israel.) <|cite_end|>. Given this direct relationship, it is vital to have a comprehensive maintenance strategy that ensures all infrastructural components are maintained at or above the minimum required service level while unnecessary maintenance is avoided. Such strategic planning is crucial not only for the sustained economic performance of a nation but also for safeguarding the well-being and quality of life of its populace. The typical maintenance strategy for large infrastructural assets is to define either a proactive or reactive maintenance policy. A reactive maintenance policy entails that maintenance is only executed when the asset is close to failure or has already failed.
The main benefit of this strategy is that maintenance is not done unnecessarily and is therefore cost-efficient; however, Swanson <|cite_start|> (Reference: Linking maintenance strategies to performance: ) <|cite_end|> describes this strategy as a fire-fighting strategy for maintenance planning because maintenance is only done when an asset is nearly failing or has already failed. For infrastructural assets, such failures can be catastrophic, as illustrated by the collapse of the Grimburgwal quay wall in Amsterdam and the deadly collapse of the Morandi bridge in Genoa <|cite_start|> (Reference: Pre-collapse space geodetic observations of critical infrastructure: The Morandi Bridge, Genoa, Italy: We present a methodology for the assessment of possible pre-failure bridge deformations, based on Synthetic Aperture Radar (SAR) observations. We apply this methodology to obtain a detailed 15-year survey of the Morandi bridge (Polcevera Viaduct) in the form of relative displacements across the structure prior to its collapse on August 14th 2018. We generated a displacement map for the structure from space-based SAR measurements acquired by the Italian constellation COSMO-SkyMed and the European constellation Sentinel-1A/B over the period 2009–2018. Historical satellite datasets include Envisat data spanning 2003–2011. The map reveals that the bridge was undergoing an increased magnitude of deformations over time prior to its collapse. This technique shows that the deck next to the collapsed pier was characterized since 2015 by increasing relative displacements. The COSMO-SkyMed dataset reveals the increased deformation magnitude over time of several points located near the strands of this deck between 12th March 2017 and August 2018.) <|cite_end|>. A proactive maintenance policy would prevent these failures by performing maintenance so that the asset will never deteriorate to a level at which it can fail. This comes with the downside that proactive policies are less cost-efficient than reactive ones. How much less depends on how well the proactive policy is formulated. An example of a proactive policy is a time-based maintenance strategy, where maintenance is performed at a set interval. Time-based maintenance is easy to implement but is, in most cases, not optimal because maintenance is performed even when the asset is still in a healthy state and is, therefore, more costly <|cite_start|> (Reference: The influence of practical factors on the benefits of condition-based maintenance over time-based maintenance: ) <|cite_end|>. Condition-based maintenance alleviates this by only performing maintenance if an asset is in a specific condition. However, this requires sufficient monitoring of the asset through inspections or other methods. To improve this monitoring, predictive maintenance is applied <|cite_start|> (Reference: Predictive maintenance in the Industry 4.0: A systematic literature review: ) <|cite_end|>, whereby the future state of an asset, or the moment a failure will occur, is predicted, minimizing the maintenance and inspections needed, both of which are costly operations with scarce capacity. A prescriptive maintenance policy uses these predictions to construct an optimal maintenance plan by planning maintenance not only to prevent a single failure but also over a longer time span in which multiple failures can occur.
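As simple decision rules, the non-prescriptive policy types above could look like the following sketch; the condition scale, thresholds, and interval are illustrative assumptions, not values from any particular asset.

\begin{verbatim}
def reactive_policy(condition):
    """Repair only once the component has (almost) failed."""
    return "repair" if condition >= 4 else "do_nothing"  # 0 = new, 4 = failed

def time_based_policy(t, interval=10):
    """Repair at a fixed interval, regardless of condition."""
    return "repair" if t % interval == 0 else "do_nothing"

def condition_based_policy(condition, threshold=2):
    """Repair as soon as the (inspected) condition crosses a threshold."""
    return "repair" if condition >= threshold else "do_nothing"
\end{verbatim}

Rule-based heuristics of this kind also serve as the baselines against which MO-DCMAC is later compared.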
Many different methods can be used for prescriptive maintenance; some examples are integer linear programming <|cite_start|> (Reference: An integer linear programming approach for pavement maintenance and rehabilitation optimization: ABSTRACT A highway in poor conditions can raise transportation costs. Due to budgetary constraints, pavement maintenance programming is considered a difficult decision-making problem. In this article we propose a novel mathematical model and a different variant of the pavement maintenance management problem, solved with integer linear programming. The novelty of this approach is the use of the Pavement Surface Rating as the condition indicator, along with a proposed conversion strategy between most used performance indices. Additionally, we propose a simpler and broader deterioration model, when compared to existent ones, using a table system. This renders the model to be solved easily, allowing it to be implemented worldwide, given its generic characteristics. Many computational experiments were performed, both on artificial benchmark instances and on a real-world case study. The proposed model is shown to obtain optimal solutions in short computational times, and it is able to solve much larger instances than the ones found in the literature. Optimal solutions from benchmark instances, consisting of 5,000 segments and an analysis period of 30 years, were found in less than 45 minutes. Additionally, the optimal solutions have a difference of more than 20% in average, when compared to a greedy algorithm.) <|cite_end|> <|cite_start|> (Reference: A mixed-integer linear programming model for integrated production and preventive maintenance scheduling in the capital goods industry: The scheduling literature is extensive, but much of this work is theoretical and does not capture the complexity of real world systems. Capital goods companies produce products with deep and complex product structures, each of which requires the coordination of jobbing, batch, flow and assembly processes. Many components require numerous operations on multiple machines. Integrated scheduling problems simultaneously consider two or more simultaneous decisions. Previous production scheduling research in the capital goods industry has neglected maintenance scheduling and used metaheuristics with stochastic search that cannot guarantee an optimal solution. This paper presents a novel mixed integer linear programming model for simultaneously solving the integrated production and preventive maintenance scheduling problem in the capital goods industry, which was tested using data from a collaborating company. The objective was to minimise total costs including: tardiness and earliness penalty costs; component and assembly holding costs; preventive maintenance costs; and set-up, production, transfer and production idle time costs. Thus, the objective function and problem formulation were more extensive than previous research. The tool was successfully tested using data obtained from a collaborating company. It was found that the company’s total cost could be reduced by up to 63.5%.) <|cite_end|> or genetic algorithms <|cite_start|> (Reference: Multi-year maintenance planning framework using multi-attribute utility theory and genetic algorithms: ) <|cite_end|> <|cite_start|> (Reference: Pre-Earthquake Multi-Objective Probabilistic Retrofit Optimization of Bridge Networks Based on Sustainability: Planning retrofit actions on bridge networks under tight budget constraints is a challenging process. 
Because of the uncertainties associated with this process, a probabilistic approach is necessary. In this paper, a probabilistic methodology to establish optimum pre-earthquake retrofit plans for bridge networks based on sustainability is developed. A multicriteria optimization problem is formulated to find the optimum timing of retrofit actions for bridges within a network. The sustainability of a bridge network and the total cost of retrofit actions are considered as conflicting criteria. The sustainability is quantified in terms of the expected economic losses. The uncertainties associated with seismic hazard and structural vulnerability are considered. The methodology is illustrated on an existing bridge network. Genetic algorithms are used to solve the multicriteria optimization problem. The effects of deterioration on bridge seismic performance are considered. The effects of the time horizon on the Pareto optimal solutions are also investigated.) <|cite_end|> <|cite_start|> (Reference: Multi-objective optimization for sustainable road network maintenance under traffic equilibrium: Incorporating costs and environmental impacts: ) <|cite_end|>. In recent years, reinforcement learning has been shown to be a promising research direction for prescriptive maintenance <|cite_start|> (Reference: Managing engineering systems with large state and action spaces through deep reinforcement learning: Decision-making for engineering systems can be efficiently formulated as a Markov Decision Process (MDP) or a Partially Observable MDP (POMDP). Typical MDP and POMDP solution procedures utilize offline knowledge about the environment and provide detailed policies for relatively small systems with tractable state and action spaces. However, in large multi-component systems the sizes of these spaces easily explode, as system states and actions scale exponentially with the number of components, whereas environment dynamics are difficult to be described in explicit forms for the entire system and may only be accessible through numerical simulators. In this work, to address these issues, an integrated Deep Reinforcement Learning (DRL) framework is introduced. The Deep Centralized Multi-agent Actor Critic (DCMAC) is developed, an off-policy actor-critic DRL approach, providing efficient life-cycle policies for large multi-component systems operating in high-dimensional spaces. Apart from deep function approximations that parametrize large state spaces, DCMAC also adopts a factorized representation of the system actions, being able to designate individualized component- and subsystem-level decisions, while maintaining a centralized value function for the entire system. DCMAC compares well against Deep Q-Network (DQN) solutions and exact policies, where applicable, and outperforms optimized baselines that are based on time-based, condition-based and periodic policies.) <|cite_end|> <|cite_start|> (Reference: Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints: Determination of inspection and maintenance policies for minimizing long-term risks and costs in deteriorating engineering environments constitutes a complex optimization problem. 
Major computational challenges include the (i) curse of dimensionality, due to exponential scaling of state/action set cardinalities with the number of components; (ii) curse of history, related to exponentially growing decision-trees with the number of decision-steps; (iii) presence of state uncertainties, induced by inherent environment stochasticity and variability of inspection/monitoring measurements; (iv) presence of constraints, pertaining to stochastic long-term limitations, due to resource scarcity and other infeasible/undesirable system responses. In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDP) and multi-agent Deep Reinforcement Learning (DRL). POMDPs optimally tackle (ii)-(iii), combining stochastic dynamic programming with Bayesian inference principles. Multi-agent DRL addresses (i), through deep function parametrizations and decentralized control assumptions. Challenge (iv) is herein handled through proper state augmentation and Lagrangian relaxation, with emphasis on life-cycle risk-based constraints and budget limitations. The underlying algorithmic steps are provided, and the proposed framework is found to outperform well-established policy baselines and facilitate adept prescription of inspection and intervention actions, in cases where decisions must be made in the most resource- and risk-aware manner.) <|cite_end|> <|cite_start|> (Reference: Grouping of Maintenance Actions with Deep Reinforcement Learning and Graph Convolutional Networks: : Reinforcement learning (RL) has shown promising performance in several applications such as robotics and games. However, the use of RL in emerging real-world domains such as smart industry and asset management remains scarce. This paper addresses the problem of optimal maintenance planning using historical data. We propose a novel Deep RL (DRL) framework based on Graph Convolutional Networks (GCN) to leverage the inherent graph structure of typical assets. As demonstrator, we employ an underground sewer pipe network. In particular, instead of dispersed maintenance actions of individual pipes across the network, the GCN ensures the grouping of maintenance actions of geographically close pipes. We perform experiments using the distinct physical characteristics, deterioration profiles, and historical data of sewer inspections within an urban environment. The results show that combining Deep Q-Networks (DQN) with GCN leads to structurally more reliable networks and a higher degree of maintenance grouping, compared to DQN with fully-connected layers and standard preventive and corrective maintenance strategy that are often adopted in practice. Our approach shows potential for developing efficient and practical maintenance plans in terms of cost and reliability.) <|cite_end|> <|cite_start|> (Reference: Inference and Maintenance Planning of Monitored Structures through Markov Chain Monte Carlo and Deep Reinforcement Learning: : A key computational challenge in maintenance planning for deteriorating structures is to concurrently secure (i) optimality of decisions over long planning horizons, and (ii) accuracy of real-time parameter updates in high-dimensional stochastic spaces. Both are often encumbered by the presence of discretized continuous-state models that describe the underlying deterioration processes, and the emergence of combinatorial decision spaces due to multi-component environments. 
Recent advances in Deep Reinforcement Learning (DRL) formulations for inspection and maintenance planning provide us with powerful frameworks to handle efficiently near-optimal decision-making in immense state and action spaces without the need for offline system knowledge. Moreover, Bayesian Model Updating (BMU), aided by advanced sampling methods, allows us to address dimensionality and accuracy issues related to discretized degradation processes. Building upon these concepts, we develop a joint framework in this work, coupling DRL, more specifically deep Q-learning and actor-critic algorithms, with BMU through Hamiltonian Monte Carlo. Single-and multi-component systems are examined, and it is shown that the proposed methodology yields reduced lifelong maintenance costs, and policies of high fidelity and sophistication compared to traditional optimized time-and condition-based maintenance strategies.) <|cite_end|> <|cite_start|> (Reference: Hierarchical reinforcement learning for transportation infrastructure maintenance planning: ) <|cite_end|> <|cite_start|> (Reference: A deep reinforcement learning approach for real-time sensor-driven decision making and predictive analytics: ) <|cite_end|>. With Reinforcement Learning (RL), the problem of prescriptive maintenance is formulated under a sequential decision-making setting. This means that every decision made does not just impact the immediate future but also has long-term effects. For instance, neglecting maintenance could lead to catastrophic failure or require extensive maintenance in the future. RL learns to optimally make decisions through trial-and-error, whereby at each decision step, it receives a singular reward signal that indicates how well it is performing. The goal is to maximize the cumulative reward signal. Other reasons RL is promising for infrastructural maintenance planning are its ability to scale to larger assets, to plan decades into the future, and to plan under uncertainty <|cite_start|> (Reference: Managing engineering systems with large state and action spaces through deep reinforcement learning: Decision-making for engineering systems can be efficiently formulated as a Markov Decision Process (MDP) or a Partially Observable MDP (POMDP). Typical MDP and POMDP solution procedures utilize offline knowledge about the environment and provide detailed policies for relatively small systems with tractable state and action spaces. However, in large multi-component systems the sizes of these spaces easily explode, as system states and actions scale exponentially with the number of components, whereas environment dynamics are difficult to be described in explicit forms for the entire system and may only be accessible through numerical simulators. In this work, to address these issues, an integrated Deep Reinforcement Learning (DRL) framework is introduced. The Deep Centralized Multi-agent Actor Critic (DCMAC) is developed, an off-policy actor-critic DRL approach, providing efficient life-cycle policies for large multi-component systems operating in high-dimensional spaces. Apart from deep function approximations that parametrize large state spaces, DCMAC also adopts a factorized representation of the system actions, being able to designate individualized component- and subsystem-level decisions, while maintaining a centralized value function for the entire system.
DCMAC compares well against Deep Q-Network (DQN) solutions and exact policies, where applicable, and outperforms optimized baselines that are based on time-based, condition-based and periodic policies.) <|cite_end|> <|cite_start|> (Reference: Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints: Determination of inspection and maintenance policies for minimizing long-term risks and costs in deteriorating engineering environments constitutes a complex optimization problem. Major computational challenges include the (i) curse of dimensionality, due to exponential scaling of state/action set cardinalities with the number of components; (ii) curse of history, related to exponentially growing decision-trees with the number of decision-steps; (iii) presence of state uncertainties, induced by inherent environment stochasticity and variability of inspection/monitoring measurements; (iv) presence of constraints, pertaining to stochastic long-term limitations, due to resource scarcity and other infeasible/undesirable system responses. In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDP) and multi-agent Deep Reinforcement Learning (DRL). POMDPs optimally tackle (ii)-(iii), combining stochastic dynamic programming with Bayesian inference principles. Multi-agent DRL addresses (i), through deep function parametrizations and decentralized control assumptions. Challenge (iv) is herein handled through proper state augmentation and Lagrangian relaxation, with emphasis on life-cycle risk-based constraints and budget limitations. The underlying algorithmic steps are provided, and the proposed framework is found to outperform well-established policy baselines and facilitate adept prescription of inspection and intervention actions, in cases where decisions must be made in the most resource- and risk-aware manner.) <|cite_end|>. This ability to plan under uncertainty is essential because a key insight behind prescriptive maintenance is to explicitly take uncertainty about the condition state of the asset into account while deciding the best course of action. This uncertainty stems from the fact that the structural condition of most infrastructural assets is not fully observable due to their physical locations (e.g., bridge piles being submerged in water) and/or incomplete inspections due to the complexity of the asset's components, and taking it into account is key to an effective maintenance strategy. In addition to considering this uncertainty, it is equally important to take the maintenance goal into account. This goal is, in most cases, formulated as a set of objectives for which the maintenance plan should be optimized, such as monetary cost, safety, or availability. However, comparing these objectives is challenging. For example, we can readily calculate the maintenance cost of a road or of sewer pipes; conversely, we cannot easily determine the economic benefits of performing that maintenance, either because of the many economic actors that utilize a road or because, in the case of sewer pipes, there is no clear direct economic benefit. Furthermore, these maintenance activities aim to achieve several objectives, including ensuring serviceability for roads and sewer pipes and maintaining road availability.
Therefore, optimizing infrastructural maintenance requires a multi-objective approach in which these objectives are weighed against each other. Existing RL methods for prescriptive maintenance can only optimize for a single objective, whereas most real-world maintenance optimization problems are multi-objective. To address this, we introduce \emph{Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC)}, which is based on the DCMAC <|cite_start|> (Reference: Managing engineering systems with large state and action spaces through deep reinforcement learning: Decision-making for engineering systems can be efficiently formulated as a Markov Decision Process (MDP) or a Partially Observable MDP (POMDP). Typical MDP and POMDP solution procedures utilize offline knowledge about the environment and provide detailed policies for relatively small systems with tractable state and action spaces. However, in large multi-component systems the sizes of these spaces easily explode, as system states and actions scale exponentially with the number of components, whereas environment dynamics are difficult to be described in explicit forms for the entire system and may only be accessible through numerical simulators. In this work, to address these issues, an integrated Deep Reinforcement Learning (DRL) framework is introduced. The Deep Centralized Multi-agent Actor Critic (DCMAC) is developed, an off-policy actor-critic DRL approach, providing efficient life-cycle policies for large multi-component systems operating in high-dimensional spaces. Apart from deep function approximations that parametrize large state spaces, DCMAC also adopts a factorized representation of the system actions, being able to designate individualized component- and subsystem-level decisions, while maintaining a centralized value function for the entire system. DCMAC compares well against Deep Q-Network (DQN) solutions and exact policies, where applicable, and outperforms optimized baselines that are based on time-based, condition-based and periodic policies.) <|cite_end|> and MOCAC <|cite_start|> (Reference: Actor-critic multi-objective reinforcement learning for non-linear utility functions: ) <|cite_end|> algorithms for maintenance planning and multi-objective reinforcement learning, respectively. MO-DCMAC learns how to construct a maintenance plan in the same sequence as is currently done by asset managers, whereby its score is determined by how well the resulting maintenance plan performs according to the utility function used. If we consider infrastructural maintenance optimization as a multi-objective problem, we also need to consider in what kind of scenario our method will be used. Hayes et al. <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results.
This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|> describe multiple scenarios for when a multi-objective approach is required. In most of these scenarios described by Hayes et al., the weighting of the objectives is not determined; however, asset managers who currently plan the maintenance know how to weigh these objectives with existing methodologies. One of these methodologies is the \emph{Failure Mode, Effects and Criticality Analysis (FMECA)} methodology <|cite_start|> (Reference: Fuzzy logic prioritization of failures in a system failure mode, effects and criticality analysis: ) <|cite_end|>, where the utility (criticality score) of the different ways an asset can fail depends on both costs and risk (as well as other factors such as environmental effects), which are combined in a non-linear manner. Therefore, according to Hayes et al. <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|>, we should consider the \textit{known utility function scenario} because we know how to scalarize all the objectives into a singular value. One can assume that if we know how to weigh objectives against each other, we could use existing reinforcement learning methods for the maintenance optimization of larger assets <|cite_start|> (Reference: Managing engineering systems with large state and action spaces through deep reinforcement learning: Decision-making for engineering systems can be efficiently formulated as a Markov Decision Process (MDP) or a Partially Observable MDP (POMDP). Typical MDP and POMDP solution procedures utilize offline knowledge about the environment and provide detailed policies for relatively small systems with tractable state and action spaces.
However, in large multi-component systems the sizes of these spaces easily explode, as system states and actions scale exponentially with the number of components, whereas environment dynamics are difficult to be described in explicit forms for the entire system and may only be accessible through numerical simulators. In this work, to address these issues, an integrated Deep Reinforcement Learning (DRL) framework is introduced. The Deep Centralized Multi-agent Actor Critic (DCMAC) is developed, an off-policy actor-critic DRL approach, providing efficient life-cycle policies for large multi-component systems operating in high-dimensional spaces. Apart from deep function approximations that parametrize large state spaces, DCMAC also adopts a factorized representation of the system actions, being able to designate individualized component- and subsystem-level decisions, while maintaining a centralized value function for the entire system. DCMAC compares well against Deep Q-Network (DQN) solutions and exact policies, where applicable, and outperforms optimized baselines that are based on time-based, condition-based and periodic policies.) <|cite_end|> <|cite_start|> (Reference: Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints: Determination of inspection and maintenance policies for minimizing long-term risks and costs in deteriorating engineering environments constitutes a complex optimization problem. Major computational challenges include the (i) curse of dimensionality, due to exponential scaling of state/action set cardinalities with the number of components; (ii) curse of history, related to exponentially growing decision-trees with the number of decision-steps; (iii) presence of state uncertainties, induced by inherent environment stochasticity and variability of inspection/monitoring measurements; (iv) presence of constraints, pertaining to stochastic long-term limitations, due to resource scarcity and other infeasible/undesirable system responses. In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDP) and multi-agent Deep Reinforcement Learning (DRL). POMDPs optimally tackle (ii)-(iii), combining stochastic dynamic programming with Bayesian inference principles. Multi-agent DRL addresses (i), through deep function parametrizations and decentralized control assumptions. Challenge (iv) is herein handled through proper state augmentation and Lagrangian relaxation, with emphasis on life-cycle risk-based constraints and budget limitations. The underlying algorithmic steps are provided, and the proposed framework is found to outperform well-established policy baselines and facilitate adept prescription of inspection and intervention actions, in cases where decisions must be made in the most resource- and risk-aware manner.) <|cite_end|> <|cite_start|> (Reference: Hierarchical reinforcement learning for transportation infrastructure maintenance planning: ) <|cite_end|> because the utility function allows us to scalarize the objectives into a single reward value. However, even with a known utility function, these existing methods are undesirable for multi-objective maintenance optimization: they require scalarization at every training step, whereas the utility function weighs the objectives over the whole maintenance plan.
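To make this mismatch concrete, the following Python sketch (our own illustration; the multiplicative, FMECA-style utility and the episode values are hypothetical placeholders) shows that summing per-step scalarized rewards generally disagrees with applying a non-linear utility to the return of the whole plan:

# Illustration: per-step scalarization vs. utility over the whole plan.
# The utility below is a hypothetical FMECA-style criticality score in
# which cost and risk interact multiplicatively (non-linearly).
def utility(total_cost, total_risk):
    return -(total_cost * total_risk)

# A toy three-step maintenance episode with (cost, risk) reward vectors.
episode = [(2.0, 0.10), (1.0, 0.05), (4.0, 0.20)]

# (a) Utility of the whole plan, as an asset manager would score it.
total_cost = sum(cost for cost, _ in episode)
total_risk = sum(risk for _, risk in episode)
plan_utility = utility(total_cost, total_risk)          # -(7.0 * 0.35) = -2.45

# (b) Scalarizing at every step and summing, as standard RL would do.
stepwise_utility = sum(utility(cost, risk) for cost, risk in episode)
                                                        # -(0.2 + 0.05 + 0.8) = -1.05

print(plan_utility, stepwise_utility)  # the two values disagree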
Moreover, in this paper, we consider FMECA and other non-linear methodologies as our utility function, which could lead to our maintenance optimization problem being intractable <|cite_start|> (Reference: A Survey of Multi-Objective Sequential Decision-Making: Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.) <|cite_end|> if we use standard reinforcement learning. Furthermore, we formulate maintenance optimization for infrastructural assets as a \emph{multi-objective partially observable Markov decision process (MOPOMDP)} <|cite_start|> (Reference: Evolving Policies for Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs): Plans and decisions in many real-world scenarios are made under uncertainty and to satisfy multiple, possibly conflicting, objectives. In this work, we contribute the multi-reward partially-observable Markov decision process (MR-POMDP) as a general modelling framework. To solve MR-POMDPs, we present two hybrid (memetic) multi-objective evolutionary algorithms that generate non-dominated sets of policies (in the form of stochastic finite state controllers). Performance comparisons between the methods on multi-objective problems in robotics (with 2, 3 and 5 objectives), web-advertising (with 3, 4 and 5 objectives) and infectious disease control (with 3 objectives), revealed that memetic variants outperformed their original counterparts. We anticipate that the MR-POMDP along with multi-objective evolutionary solvers will prove useful in a variety of theoretical and real-world applications.) <|cite_end|>. This MOPOMDP is formulated not only to handle uncertainty about the state of the components but also to model different reward signals that can be taken into account with a non-linear utility function. Additionally, we introduce a novel method for incorporating a probability value as a reward or objective. This reward formulation ensures that the episodic return is always a valid probability, while still allowing discounting. Lastly, we test MO-DCMAC on multiple environments where maintenance is planned for an infrastructural asset. For these environments, we use the historical quay walls of Amsterdam as a real-world use case (Sections~\ref{sec:smaller_quay_wall} and \ref{sec:larger_quay_wall}).
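To illustrate the idea behind such a probability-valued reward (a sketch of one plausible construction; the exact formulation used by MO-DCMAC may differ), per-step rewards can be defined so that their discounted sum telescopes to the failure probability at the end of the episode:

gamma = 0.99

def probability_rewards(pf_trajectory, gamma=gamma):
    """pf_trajectory[t] is the asset's probability of failure at timestep t.
    Returns rewards r_t such that sum_t gamma^t * r_t == pf_trajectory[-1]."""
    rewards, prev = [], 0.0
    for t, pf in enumerate(pf_trajectory):
        rewards.append((pf - prev) / gamma ** t)  # discount cancels in the sum
        prev = pf
    return rewards

pf = [0.01, 0.02, 0.05, 0.04]  # failure probability under deterioration/repair
rewards = probability_rewards(pf)
ret = sum(gamma ** t * r for t, r in enumerate(rewards))
assert abs(ret - pf[-1]) < 1e-9  # the discounted return is the final probability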
Related Work \label{sec:background} In this section, we discuss the key topics needed to understand MO-DCMAC. We begin by explaining standard reinforcement learning (RL) and deep reinforcement learning (DRL). Here, we also focus on actor-critic methods, since MO-DCMAC is an actor-critic method. Thereafter, we explain multi-objective reinforcement learning (MORL) and how it differs from standard RL. In particular, we provide the essential background on why standard RL methods cannot be used for our multi-objective maintenance optimization problem even though we have a known utility function. \subsection{Reinforcement Learning} In maintenance optimization, the challenge is to determine the most effective sequence of maintenance actions over time. This is inherently a sequential decision-making problem, in which the objective is to optimize the scheduling and execution of maintenance activities to maximize system reliability and minimize costs. To achieve this, we want an agent to learn an optimal maintenance policy. The agent learns this by interacting with an environment, which describes the possible maintenance actions and their effects. To frame this problem in a structured manner, we can formulate the maintenance optimization problem as a Markov decision process (MDP). An MDP is a tuple $\mathcal{M}_{\text{MDP}}=\left \langle S, A, T, \mathbf{R}, \gamma, H \right \rangle$, in which: \begin{itemize} \item $S$ is the set of states, with $s_t \in S$ the state at timestep $t$. \item $A$ is the set of possible actions, with $a_t \in A$ the action taken at timestep $t$. \item $T: S \times A \times S \rightarrow \left [0,1 \right ]$ is the transition function that describes the probability of transitioning from state $s_t$ to state $s_{t+1}$ if action $a_{t}$ is taken at timestep $t$. \item $\mathbf{R}: S \times A \times S \rightarrow \mathbb{R}$ is the reward function. The reward function maps the state $s_t$ and the taken action $a_t$ to a scalar reward value $r_t$, which indicates how well the agent is performing. \item The discount factor $\gamma \in [0,1]$ determines the significance of future rewards in the learning process. \item $H$ is the horizon, which indicates the length of an episode. \end{itemize} With an MDP, we want the agent to learn a policy $\pi$ that maximizes the discounted cumulative reward, called the return: \begin{align} \pi^{*} &= \text{arg } \underset{\pi}{\text{max}}\: \mathbb{E} \left [ R_{t} \mid \pi, s_{t} = s\right ] \nonumber \\ &= \text{arg } \underset{\pi}{\text{max}}\: \mathbb{E} \left [ \sum_{k=t}^{H}\gamma^{k}r_{k} \mid \pi, s_{t} = s \right ]\label{eq:policy_maximizes_cum_rew}, \end{align} with $\pi^{*}$ being the optimal policy. The expected return of a policy when taking action $a_{t}$ in state $s_{t}$ is described through the state-action value function (Q-function): \begin{equation} \label{eq:q_function} Q^{\pi}{\left(s, a \right)} = \mathbb{E}\left [ R_{t} \mid \pi, s_t=s, a_t=a\right ], \end{equation} with the optimal Q-function being $Q^{*}{\left(s,a \right)}=\underset{\pi}{\max}\:Q^{\pi}{\left(s,a\right)}$. Similarly, the expected return from a given state $s_t$ is formulated through the value function (V-function), \begin{equation} \label{eq:value_function} V^{\pi}{\left(s\right)} = \mathbb{E}\left [ R_{t} \mid \pi, s_t=s\right ]. \end{equation} The aforementioned equations can be solved using tabular approaches, where a state-action or state value is stored for each possible state and action.
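As a concrete illustration of such a tabular approach, the following minimal one-step Q-learning sketch in Python (textbook form; the environment object with reset(), step(), and an actions list is a hypothetical stand-in) stores one value per state-action pair and updates it towards a one-step bootstrapped target:

import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)]: one stored value per pair
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection over the stored values
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # one-step bootstrapped target; no bootstrapping at terminal states
            boot = 0.0 if done else gamma * max(Q[(s_next, act)] for act in env.actions)
            Q[(s, a)] += alpha * (r + boot - Q[(s, a)])
            s = s_next
    return Q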
However, as the number of states or actions grows, a tabular approach, like Q-learning <|cite_start|> (Reference: Technical Note: Q-Learning: ) <|cite_end|>, does not scale well. Deep reinforcement learning (DRL) provides a scalable solution. Deep Q-Networks (DQN) <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> use a neural network to learn the state-action value (Equation~\ref{eq:q_function}). DQN and similar methods that learn a state or state-action value are called value-based methods. Another approach is policy-based methods. These methods directly learn a policy $\pi{\left (a_t \mid s_t \right)}$, which defines a probability distribution over the possible actions (Equation~\ref{eq:policy_maximizes_cum_rew}). A major downside of policy-based methods is that they suffer from high variance during training and are thus unstable without modifications. One approach to reduce this variance is to use actor-critic methods. An actor-critic method consists of two parts: the actor, which learns a policy $\pi$, and the critic, which learns to estimate the future expected return through either a state-action value (Equation~\ref{eq:q_function}) or a state value (Equation~\ref{eq:value_function}). Besides reducing the variance, this also enables the actor to be updated at every timestep. Advantage Actor-Critic, introduced by Mnih et al. <|cite_start|> (Reference: Asynchronous Methods for Deep Reinforcement Learning: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.) <|cite_end|>, further reduces the variance by updating through the advantage function, $A{\left(s,a \right )}=Q{\left(s,a \right )}-V{\left(s \right )}$, which measures how much better or worse an action $a_t$ is compared to the other actions at state $s_{t}$. The advantage function can also be approximated using only the state value function, by $A{\left(s_t,a_t \right )} \approx r_{t} + \gamma V{\left(s_{t+1}\right)} - V{\left(s_{t}\right)}$, allowing the critic to learn only the state value function $V$ instead of the state-action value function as well. The policy is then updated through the following loss function, \begin{align} L{\left ( \pi \right)} =& - \sum_{t=0}^{H}A{\left (s_{t},a_{t} \right)} \log{\left (\pi_{\theta}{\left (a_{t} \mid s_{t} \right)} \right)} \nonumber \\ =& - \sum_{t=0}^{H}\left ( r_{t} + \gamma V_\psi{\left(s_{t+1} \right)} - V_\psi{\left(s_{t} \right) } \right ) \nonumber \\ & \cdot \log{\left (\pi_{\theta}{\left (a_{t} \mid s_{t} \right)} \right)}\label{eq:actor_critic_pol_update}, \end{align} where $\theta$ are the weights of the actor $\pi$ and $\psi$ the weights of the critic $V$.
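A minimal sketch of this update in PyTorch is given below (our own illustration of Equation~\ref{eq:actor_critic_pol_update}; the actor and critic are assumed to be small networks mapping states to action logits and scalar values, the optimizer is assumed to hold the parameters of both, and terminal-state masking is omitted for brevity):

import torch
import torch.nn.functional as F

def a2c_update(actor, critic, optimizer, states, actions, rewards, next_states,
               gamma=0.99):
    values = critic(states).squeeze(-1)                  # V_psi(s_t)
    with torch.no_grad():
        targets = rewards + gamma * critic(next_states).squeeze(-1)
    advantages = targets - values.detach()               # r_t + gamma V(s_{t+1}) - V(s_t)

    dist = torch.distributions.Categorical(logits=actor(states))
    log_probs = dist.log_prob(actions)                   # log pi_theta(a_t | s_t)

    actor_loss = -(advantages * log_probs).sum()         # policy loss from the equation
    critic_loss = F.mse_loss(values, targets)            # fit V_psi to the targets

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()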
\subsection{Multi-Objective Reinforcement Learning} Standard RL methods can only optimize a policy for single-objective problems, like optimizing for cost. As such, these methods fall short in many real-world problems in which multiple objectives are in play, such as infrastructural maintenance optimization. For example, some objectives, such as safety, cannot be converted to cost, and it is a complex task to capture the output of asset reliability assessment methodologies, such as FMECA, in a reward function that combines these objectives. Therefore, we utilize \emph{multi-objective reinforcement learning} to optimize a policy for infrastructural maintenance, such that we can use these assessment methodologies in the learning process. In multi-objective reinforcement learning (MORL), the environment and interactions are formulated as a multi-objective Markov decision process (MOMDP) <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|>. This is the tuple $\mathcal{M}_{\text{MOMDP}}=\left \langle S, A, T, \vec{\mathbf{R}}, \gamma, H \right \rangle$. The only notable difference between a standard MDP and an MOMDP is the reward function $\vec{\mathbf{R}}: S \times A \times S \rightarrow \mathbb{R}^d $, which returns a vector of $d$ objectives instead of a single scalar value. An MOMDP can be reduced to a standard MDP if $d=1$ because the reward function would then return only a single reward value. In standard RL, we can easily determine whether a policy $\pi$ is performing better than another policy ${\pi}'$ if \begin{equation}\label{eq:policy_compare} \mathbb{E}{\left [R_t \mid \pi, s_{t}=s \right]} > \mathbb{E}{\left [R_t \mid{\pi}', s_{t}=s \right]}. \end{equation} In contrast, this is not trivial in MORL. If we have $d=2$ and the expected return for $\pi$ is $\left(0, 10\right)$ and for ${\pi}'$ it is $\left (10, 0 \right )$, we cannot decide which policy performs better, or even whether they perform equally well, because one objective might be significantly more important than the other. This complexity in comparing objectives is evident in infrastructural maintenance optimization. For instance, is it worth reducing the probability of collapse from 2\% to 1\% if the cost doubles? Asset managers use methodologies like FMECA to assess the best maintenance policy. Therefore, we will focus on the \emph{utility-based approach} as described by Hayes et al.
<|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|>, so we can include methodologies such as FMECA in the training procedure. A utility function $u: \mathbb{R}^d \rightarrow \mathbb{R}$ is a mapping of the episodic return $\vec{R}=\sum_{k=0}^{H}\gamma^{k}\vec{r}_k$ to a scalar value. The utility function needs to be \emph{strictly monotonically increasing}, meaning that if one of the objectives increases, the utility can never decrease. It is nearly impossible to find an optimal policy if the utility function is not monotonically increasing. This is because, in utility-based MORL, the agent aims to learn a policy that maximizes the utility; if the utility function is not monotonically increasing, the agent may reach a point during training where it can only improve the utility further by first decreasing it, causing it to get stuck in a local optimum. There are two criteria for optimizing over a known utility function in MORL <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|>. The first is the \emph{Scalarized Expected Return} (SER) criterion, in which the utility function is applied to the expected return: \begin{equation} \label{eq:ser_criterion} \pi^{*} = \text{arg } \underset{\pi}{\text{max}}\: u \left ( \mathbb{E} \left [ \sum_{k=t}^{H}\gamma^{k}\vec{r}_{k} \mid \pi, s_t=s\right ]\right). \end{equation} The other criterion is the \emph{Expected Scalarized Return} (ESR), which maximizes the expected value of the utility applied to the return: \begin{equation} \label{eq:esr_criterion} \pi^{*} = \text{arg } \underset{\pi}{\text{max}}\: \mathbb{E} \left [ u \left (\sum_{k=t}^{H}\gamma^{k}\vec{r}_{k} \right)\mid \pi, s_{t}=s \right ]. \end{equation} The difference is that while SER is concerned with the utility of the average outcome, ESR takes the utility over every single rollout of the policy (and only then takes the average). When we examine most MORL methods, we see that the SER criterion is more commonly used, while in multi-objective games, the ESR criterion is used <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results.) <|cite_end|>. One explanation for the ESR criterion being understudied is that it invalidates the Bellman equation if the utility is non-linear: \begin{align} \underset{\pi}{\max}\: &\mathbb{E}{\left [u{\left (\vec{R}_{t}^{-} + \sum_{k=t}^{\infty}\gamma^{k}\vec{r}_k \right )} \mid \pi, s_{t} \right]} \neq \nonumber \\ &u{\left (\vec{R}_{t}^{-} \right )}+\underset{\pi}{\max}\: \mathbb{E}{\left [u{\left ( \sum_{k=t}^{\infty}\gamma^{k}\vec{r}_k \right )} \mid \pi, s_{t} \right]},\label{eq:esr_bellman_invalid} \end{align} where $\vec{R}_{t}^{-}=\sum^{t-1}_{k=0}\gamma^{k} \vec{r}_{k}$ is the accrued reward up to timestep $t$. Equation~\ref{eq:esr_bellman_invalid} shows how the Bellman equation becomes invalidated because any non-linear utility function under the ESR criterion does not distribute across the accrued reward and the future returns <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results.
This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|>. Therefore, the most evident criterion to use would be the SER criterion; however, a policy learned with the SER criterion could differ significantly from one learned with the ESR criterion, as shown by R\u{a}dulescu et al. <|cite_start|> (Reference: Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey: The majority of multi-agent system (MAS) implementations aim to optimise agents' policies with respect to a single objective, despite the fact that many real-world problem domains are inherently multi-objective in nature. Multi-objective multi-agent systems (MOMAS) explicitly consider the possible trade-offs between conflicting objective functions. We argue that, in MOMAS, such compromises should be analysed on the basis of the utility that these compromises have for the users of a system. As is standard in multi-objective optimisation, we model the user utility using utility functions that map value or return vectors to scalar values. This approach naturally leads to two different optimisation criteria: expected scalarised returns (ESR) and scalarised expected returns (SER). We develop a new taxonomy which classifies multi-objective multi-agent decision making settings, on the basis of the reward structures, and which and how utility functions are applied. This allows us to offer a structured view of the field, to clearly delineate the current state-of-the-art in multi-objective multi-agent decision making approaches and to identify promising directions for future research. Starting from the execution phase, in which the selected policies are applied and the utility for the users is attained, we analyse which solution concepts apply to the different settings in our taxonomy. Furthermore, we define and discuss these solution concepts under both ESR and SER optimisation criteria. We conclude with a summary of our main findings and a discussion of many promising future research directions in multi-objective multi-agent systems.) <|cite_end|>. Moreover, Hayes et al. <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. 
It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) <|cite_end|> describe when ESR or SER should be used. SER is more flexible because it takes the expected return of the objectives as input for the utility function. This means that, in our context of maintenance optimization, an asset might collapse in some runs as long as it is safe on average. ESR is stricter because it optimizes the expected value of the utility per run, which means that the asset should not collapse in any run; otherwise, the utility will be low. This strictness of the ESR criterion is better suited for maintenance optimization because an asset should never collapse.
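The following Monte-Carlo sketch (our own toy example, with a hypothetical and deliberately strict utility function) makes the difference concrete: SER evaluates the utility of the average outcome and can overlook rare collapses, whereas ESR averages the utility over individual runs and exposes them.

import random

def utility(survival, cost):
    # hypothetical strict utility: survival below a 90% tolerance is
    # punished severely, dominating any cost savings
    return (-1e6 if survival < 0.9 else 0.0) - cost

random.seed(0)
# a policy under which roughly 2% of the runs end in collapse
rollouts = [(0.0 if random.random() < 0.02 else 1.0, 100.0)
            for _ in range(10_000)]

mean_survival = sum(s for s, _ in rollouts) / len(rollouts)  # about 0.98
mean_cost = sum(c for _, c in rollouts) / len(rollouts)      # 100.0

ser = utility(mean_survival, mean_cost)                        # -100
esr = sum(utility(s, c) for s, c in rollouts) / len(rollouts)  # about -20100

print(ser)  # SER: the average run survives, so the collapses go unnoticed
print(esr)  # ESR: every collapsed run is penalized individually
<|paper_end|>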
<|paper_start|> Title: Improving Perceptual Quality of Drum Transcription with the Expanded Groove MIDI Dataset Abstract: Improving Perceptual Quality of Drum Transcription with the Expanded Groove MIDI Dataset: We introduce the Expanded Groove MIDI dataset (E-GMD), an automatic drum transcription (ADT) dataset that contains 444 hours of audio from 43 drum kits, making it an order of magnitude larger than similar datasets, and the first with human-performed velocity annotations. We use E-GMD to optimize classifiers for use in downstream generation by predicting expressive dynamics (velocity) and show with listening tests that they produce outputs with improved perceptual quality, despite similar results on classification metrics. Via the listening tests, we argue that standard classifier metrics, such as accuracy and F-measure score, are insufficient proxies of performance in downstream tasks because they do not fully align with the perceptual quality of generated outputs. Introduction Discriminative models predict the conditional distribution $p(y|x)$ over labels $y$ that correspond to an input $x$. In the space of automatic drum transcription (ADT), discriminative models are used to predict when and which drum hits are played in a drum performance, conditional on the audio of that performance. While classifier metrics such as accuracy, precision, recall, and F-measure scores are often used to evaluate discriminative models, decision theory highlights that the true quantity of interest is the expected utility (or cost) of the inferred labels in a downstream task. Recent work on piano transcription has demonstrated the value of considering downstream generation, showing that separately classifying note onsets from note persistence led to dramatic improvements in the perceptual quality of generation due to a reduction in false positive onsets <|cite_start|> (Reference: Onsets and Frames: Dual-Objective Piano Transcription: We advance the state of the art in polyphonic piano music transcription by using a deep convolutional and recurrent neural network which is trained to jointly predict onsets and frames. Our model predicts pitch onset events and then uses those predictions to condition framewise pitch predictions. During inference, we restrict the predictions from the framewise detector by not allowing a new note to start unless the onset detector also agrees that an onset for that pitch is present in the frame. We focus on improving onsets and offsets together instead of either in isolation as we believe this correlates better with human musical perception. Our approach results in over a 100% relative improvement in note F1 score (with offsets) on the MAPS dataset. Furthermore, we extend the model to predict relative velocities of normalized audio which results in more natural-sounding transcriptions.) <|cite_end|>. For the application of drum transcription, we develop a new dataset and transcription model capable of transcribing drum hit velocity (loudness) and examine how that capability contributes to the perceived quality of the transcriptions. Our key contributions include: \begin{itemize} \itemsep0em \item The Expanded Groove MIDI dataset (E-GMD), the first dataset to capture both expressive timing and velocity of human performances, with a size an order of magnitude larger than similar datasets. \item Training expressive ADT models on E-GMD to predict timings, drum hits, and velocity by incorporating a separate velocity-prediction head.
\item Demonstrating that predicting expressive dynamics (velocity) in addition to timing generates outputs with improved perceptual quality, as determined by listening tests, despite achieving similar results on classification metrics. \item Developing a new \textit{Shuffled mixup} strategy for data augmentation and regularization that effectively limits overfitting. \end{itemize} Audio samples of the dataset and examples used in the listening test are provided in the online supplement at \url{https://goo.gl/magenta/e-gmd-examples}, and the full dataset is available at \url{https://g.co/magenta/e-gmd} under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Related Work The recent work of Wu et al.~\shortcite{review} provides a comprehensive overview of ADT and includes evaluation of current state-of-the-art methods. While there has been a large collection of studies published on ADT in recent years <|cite_start|> (Reference: Towards Multi-Instrument Drum Transcription.: Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at this http URL) <|cite_end|> <|cite_start|> (Reference: Deep Unsupervised Drum Transcription: We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner. DrummerNet does not require any ground-truth transcription and, with the data-scalability of deep neural networks, learns from a large unlabeled dataset. In DrummerNet, the target drum signal is first passed to a (trainable) transcriber, then reconstructed in a (fixed) synthesizer according to the transcription estimate. By training the system to minimize the distance between the input and the output audio signals, the transcriber learns to transcribe without ground truth transcription. Our experiment shows that DrummerNet performs favorably compared to many other recent drum transcription systems, both supervised and unsupervised.)
<|cite_end|> <|cite_start|> (Reference: Increasing drum transcription vocabulary using data synthesis: ,) <|cite_end|> <|cite_start|> (Reference: {From Labeled to Unlabeled Data – On the Data Challenge in Automatic Drum Transcription: Automatic Drum Transcription (ADT), like many other music information retrieval tasks, has made progress in the past years through the integration of machine learning and audio signal processing techniques. However, with the increasing popularity of data-hungry approaches such as deep learning, the insufficient amount of data becomes more and more a challenge that concerns the generality of the resulting models and the validity of the evaluation. To address this challenge in ADT, this paper first examines the existing labeled datasets and how representative they are of the research problem. Next, possibilities of using unlabeled data to improve general ADT systems are explored. Specifically, two paradigms that harness information from unlabeled data, namely feature learning and student-teacher learning, are applied to two major types of ADT systems. All systems are evaluated on four different drum datasets. The results highlight the necessity of more and larger annotated datasets and indicate the feasibility of exploiting unlabeled data for improving ADT systems.) <|cite_end|> <|cite_start|> (Reference: Improving Peak-picking Using Multiple Time-step Loss Functions: The majority of state-of-the-art methods for music information retrieval (MIR) tasks now utilise deep learning methods reliant on minimisation of loss functions such as cross entropy. For tasks that include framewise binary classification (e.g., onset detection, music transcription) classes are derived from output activation functions by identifying points of local maxima, or peaks. However, the operating principles behind peak picking are different to that of the cross entropy loss function, which minimises the absolute difference between the output and target values for a single frame. To generate activation functions more suited to peak-picking, we propose two versions of a new loss function that incorporates information from multiple time-steps: 1) multi-individual, which uses multiple individual time-step cross entropies; and 2) multi-difference, which directly compares the difference between sequential time-step outputs. We evaluate the newly proposed loss functions alongside standard cross entropy in the popular MIR tasks of onset detection and automatic drum transcription. The results highlight the effectiveness of these loss functions in the improvement of overall system accuracies for both MIR tasks. Additionally, directly comparing the output from sequential time-steps in the multi-difference approach achieves the highest performance.) <|cite_end|> <|cite_start|> (Reference: Player Vs Transcriber: A Game Approach To Data Manipulation For Automatic Drum Transcription.: State-of-the-art automatic drum transcription (ADT) approaches utilise deep learning methods reliant on time-consuming manual annotations and require congruence between training and testing data. When these conditions are not held, they often fail to generalise. We propose a game approach to ADT, termed player vs transcriber (PvT), in which a player model aims to reduce transcription accuracy of a transcriber model by manipulating training data in two ways. First, existing data may be augmented, allowing the transcriber to be trained using recordings with modified timbres.
Second, additional individual recordings from sample libraries are included to generate rare combinations. We present three versions of the PvT model: AugExist, which augments pre-existing recordings; AugAddExist, which adds additional samples of drum hits to the AugExist system; and Generate, which generates training examples exclusively from individual drum hits from sample libraries. The three versions are evaluated alongside a state-of-the-art deep learning ADT system using two evaluation strategies. The results demonstrate that including the player network improves the ADT performance and suggests that this is due to improved generalisability. The results also indicate that although the Generate model achieves relatively low results, it is a viable choice when annotations are not accessible.) <|cite_end|> <|cite_start|> (Reference: Bayesian Drum Transcription Based on Nonnegative Matrix Factor Decomposition with a Deep Score Prior: This paper describes a statistical method of automatic drum transcription that estimates a musical score of bass and snare drums and hi-hats from a drum signal separated from a popular music signal. One of the most effective approaches for this problem is to apply nonnegative matrix factor deconvolution (NMFD) for estimating the temporal activations of drums and then perform thresholding for estimating a drum score. Such a pure audio-based approach, however, cannot avoid musically unnatural scores. To solve this, we propose a unified Bayesian model that integrates an NMFD-based acoustic model evaluating the likelihood of a drum score for a drum spectrogram, with a deep language model serving as a prior (constraint) of the score. The language model can be trained with existing drum scores in the framework of autoencoding variational Bayes and has more expressive power than the conventional statistical models. We derive an inference algorithm using Gibbs sampling, which is a marriage of the solid formalism of Bayesian learning with the expressive power of deep learning. It is shown that the proposed method not only slightly improved the F-measure score but also increased musical naturalness of the transcribed drum scores than NMFD.) <|cite_end|>, most ADT research has maintained a focus on classifier metrics to assess quality. Of the approaches that have explored deep learning <|cite_start|> (Reference: Towards Multi-Instrument Drum Transcription.: Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.)
a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at this http URL) <|cite_end|> <|cite_start|> (Reference: Deep Unsupervised Drum Transcription: We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner. DrummerNet does not require any ground-truth transcription and, with the data-scalability of deep neural networks, learns from a large unlabeled dataset. In DrummerNet, the target drum signal is first passed to a (trainable) transcriber, then reconstructed in a (fixed) synthesizer according to the transcription estimate. By training the system to minimize the distance between the input and the output audio signals, the transcriber learns to transcribe without ground truth transcription. Our experiment shows that DrummerNet performs favorably compared to many other recent drum transcription systems, both supervised and unsupervised.) <|cite_end|> <|cite_start|> (Reference: Increasing drum transcription vocabulary using data synthesis: ,) <|cite_end|> <|cite_start|> (Reference: Improving Peak-picking Using Multiple Time-step Loss Functions: The majority of state-of-the-art methods for music information retrieval (MIR) tasks now utilise deep learning methods reliant on minimisation of loss functions such as cross entropy. For tasks that include framewise binary classification (e.g., onset detection, music transcription) classes are derived from output activation functions by identifying points of local maxima, or peaks. However, the operating principles behind peak picking are different to that of the cross entropy loss function, which minimises the absolute difference between the output and target values for a single frame. To generate activation functions more suited to peak-picking, we propose two versions of a new loss function that incorporates information from multiple time-steps: 1) multi-individual, which uses multiple individual time-step cross entropies; and 2) multi-difference, which directly compares the difference between sequential time-step outputs. We evaluate the newly proposed loss functions alongside standard cross entropy in the popular MIR tasks of onset detection and automatic drum transcription. The results highlight the effectiveness of these loss functions in the improvement of overall system accuracies for both MIR tasks. Additionally, directly comparing the output from sequential time-steps in the multi-difference approach achieves the highest performance.) <|cite_end|>, research is still fairly new given the large amounts of data required to effectively produce a model. As annotating drums is still a fairly manual task, most datasets for ADT are relatively small in size and resource-intensive to create. This has led to new research into solving that problem, including unsupervised approaches <|cite_start|> (Reference: Deep Unsupervised Drum Transcription: We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner. DrummerNet does not require any ground-truth transcription and, with the data-scalability of deep neural networks, learns from a large unlabeled dataset.
In DrummerNet, the target drum signal is first passed to a (trainable) transcriber, then reconstructed in a (fixed) synthesizer according to the transcription estimate. By training the system to minimize the distance between the input and the output audio signals, the transcriber learns to transcribe without ground truth transcription. Our experiment shows that DrummerNet performs favorably compared to many other recent drum transcription systems, both supervised and unsupervised.) <|cite_end|> <|cite_start|> (Reference: {From Labeled to Unlabeled Data – On the Data Challenge in Automatic Drum Transcription: Automatic Drum Transcription (ADT), like many other music information retrieval tasks, has made progress in the past years through the integration of machine learning and audio signal processing techniques. However, with the increasing popularity of data-hungry approaches such as deep learning, the insufficient amount of data becomes more and more a challenge that concerns the generality of the resulting models and the validity of the evaluation. To address this challenge in ADT, this paper first examines the existing labeled datasets and how representative they are of the research problem. Next, possibilities of using unlabeled data to improve general ADT systems are explored. Specifically, two paradigms that harness information from unlabeled data, namely feature learning and student-teacher learning, are applied to two major types of ADT systems. All systems are evaluated on four different drum datasets. The results highlight the necessity of more and larger annotated datasets and indicate the feasibility of exploiting unlabeled data for improving ADT systems.) <|cite_end|> and the creation of synthetic datasets <|cite_start|> (Reference: Deep Unsupervised Drum Transcription: We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner. DrummerNet does not require any ground-truth transcription and, with the data-scalability of deep neural networks, learns from a large unlabeled dataset. In DrummerNet, the target drum signal is first passed to a (trainable) transcriber, then reconstructed in a (fixed) synthesizer according to the transcription estimate. By training the system to minimize the distance between the input and the output audio signals, the transcriber learns to transcribe without ground truth transcription. Our experiment shows that DrummerNet performs favorably compared to many other recent drum transcription systems, both supervised and unsupervised.) <|cite_end|> <|cite_start|> (Reference: Towards Multi-Instrument Drum Transcription.: Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. 
Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at this http URL) <|cite_end|> <|cite_start|> (Reference: Increasing drum transcription vocabulary using data synthesis: ,) <|cite_end|> <|cite_start|> (Reference: An open-source drum transcription system for Pure Data and Max MSP: This paper presents a drum transcription algorithm adjusted to the constraints of real-time audio. We introduce an instance filtering (IF) method using sub-band onset detection, which improves the performance of a system having at its core a feature-based K-nearest neighbor classifier (KNN). The architecture proposed allows for adapting different parts of the algorithm for either bass drum, snare drum or hi-hat cymbals. The open-source system is implemented in the graphic programming languages Pure Data (PD) and Max MSP, and aims to work with a large variety of drum sets. We evaluated its performance on a database of audio samples generated from a well known collection of midi drum loops randomly matched with a diverse collection of drum sets. Both of the evaluation stages, testing and validation, show an improvement in the performance when using the instance filtering algorithm.) <|cite_end|>. Given the difficulty of ADT and the limited datasets available, the overwhelming majority of ADT research has focused on the classification of three primary drum hits: Kick Drum, Snare Drum, Hi-hat (KD, SN, HH) <|cite_start|> (Reference: Real-Time Transcription and Separation of Drum Recordings Based on NMF Decomposition.: This paper proposes a real-time capable method for transcribing and separating occurrences of single drum instruments in polyphonic drum recordings. Both the detection and the decomposition are based on Non-Negative Matrix Factorization and can be implemented with very small systemic delay. We propose a simple modification to the update rules that allows to capture time-dynamic spectral characteristics of the involved drum sounds. The method can be applied in music production and music education software. Performance results with respect to drum transcription are presented and discussed. The evaluation dataset consisting of annotated drum recordings is published for use in further studies in the field.) <|cite_end|> <|cite_start|> (Reference: Drumkit transcription via convolutive NMF: Audio to midi software exists for transcribing the output of a multi-mic'ed drumkit. Such software requires that the drummer uses multiple microphones to capture a single stream of audio for each kit piece. This paper explores the first steps towards a system for transcribing a drum score based upon the input of a single mono microphone. Non-negative Matrix Factorisation is a widely researched source separation technique. We describe a system for transcribing drums using this technique, presenting an improved gains update method.
A good level of accuracy is achieved on on complex loops and there are indications the mis-transcriptions are for perceptually less important parts of the score.) <|cite_end|> <|cite_start|> (Reference: Drum transcription using partially fixed non-negative matrix factorization: In this paper, a drum transcription algorithm using partially fixed non-negative matrix factorization is presented. The proposed method allows users to identify percussive events in complex mixtures with a minimal training set. The algorithm decomposes the music signal into two parts: percussive part with pre-defined drum templates and harmonic part with undefined entries. The harmonic part is able to adapt to the music content, allowing the algorithm to work in polyphonic mixtures. Drum event times can be simply picked from the percussive activation matrix with onset detection. The system is efficient and robust even with a minimal training set. The recognition rates for the ENST dataset vary from 56.7 to 78.9% for three percussive instruments extracted from polyphonic music.) <|cite_end|> <|cite_start|> (Reference: Recurrent Neural Networks for Drum Transcription.: Music transcription is a core task in the field of music information retrieval. Transcribing the drum tracks of music pieces is a well-defined sub-task. The symbolic representation of a drum track contains much useful information about the piece, like meter, tempo, as well as various style and genre cues. This work introduces a novel approach for drum transcription using recurrent neural networks. We claim that recurrent neural networks can be trained to identify the onsets of percussive instruments based on general properties of their sound. Different architectures of recurrent neural networks are compared and evaluated using a well-known dataset. The outcomes are compared to results of a state-of-the-art approach on the same dataset. Furthermore, the ability of the networks to generalize is demonstrated using a second, independent dataset. The experiments yield promising results: while F-measures higher than state-of-the-art results are achieved, the networks are capable of generalizing reasonably well.) <|cite_end|> <|cite_start|> (Reference: Drum transcription from polyphonic music with recurrent neural networks: Automatic drum transcription methods aim at extracting a symbolic representation of notes played by a drum kit in audio recordings. For automatic music analysis, this task is of particular interest as such a transcript can be used to extract high level information about the piece, e.g., tempo, downbeat positions, meter, and genre cues. In this work, an approach to transcribe drums from polyphonic audio signals based on a recurrent neural network is presented. Deep learning techniques like dropout and data augmentation are applied to improve the generalization capabilities of the system. The method is evaluated using established reference datasets consisting of solo drum tracks as well as drums mixed with accompaniment. The results are compared to state-of-the-art approaches on the same datasets. The evaluation reveals that F-measure values higher than state of the art can be achieved using the proposed method.) <|cite_end|> <|cite_start|> (Reference: Automatic Drum Transcription using Bi-directional Recurrent Neural Networks.: Automatic drum transcription (ADT) systems attempt to generate a symbolic music notation for percussive instruments in audio recordings. 
Neural networks have already been shown to perform well in fields related to ADT such as source separation and onset detection due to their utilisation of time-series data in classification. We propose the use of neural networks for ADT in order to exploit their ability to capture a complex configuration of features associated with individual or combined drum classes. In this paper we present a bi-directional recurrent neural network for offline detection of percussive onsets from specified drum classes and a recurrent neural network suitable for online operation. In both systems, a separate network is trained to identify onsets for each drum class under observation—that is, kick drum, snare drum, hi-hats, and combinations thereof. We perform four evaluations utilising the IDMT-SMT-Drums and ENST minus one datasets, which cover solo percussion and polyphonic audio respectively. The results demonstrate the effectiveness of the presented methods for solo percussion and a capacity for identifying snare drums, which are historically the most difficult drum class to detect.) <|cite_end|> <|cite_start|> (Reference: Automatic Drum Transcription for Polyphonic Recordings using Soft Attention Mechanisms and Convolutional Neural Networks: Automatic drum transcription is the process of generating symbolic notation for percussion instruments within audio recordings. To date, recurrent neural network (RNN) systems have achieved the highest evaluation accuracies for both drum solo and polyphonic recordings, however the accuracies within a polyphonic context still remain relatively low. To improve accuracy for polyphonic recordings, we present two approaches to the ADT problem: First, to capture the dynamism of features in multiple time-step hidden layers, we propose the use of soft attention mechanisms (SA) and an alternative RNN configuration containing additional peripheral connections (PC). Second, to capture these same trends at the input level, we propose the use of a convolutional neural network (CNN), which uses a larger set of time-step features. In addition, we propose the use of a bidirectional recurrent neural network (BRNN) in the peak-picking stage. The proposed systems are evaluated along with two state-of-the-art ADT systems in five evaluation scenarios, including a newly-proposed evaluation methodology designed to assess the generalisability of ADT systems. The results indicate that all of the newly proposed systems achieve higher accuracies than the state-of-the-art RNN systems for polyphonic recordings and that the additional BRNN peak-picking stage offers slight improvement in certain contexts.) <|cite_end|>. A handful of datasets contain annotations beyond the 3 standard hits; however, the set of drum hits is not standardized, with each dataset containing a varied collection of drum hits <|cite_start|> (Reference: Towards Multi-Instrument Drum Transcription.: Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable.
In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at this http URL) <|cite_end|> <|cite_start|> (Reference: Increasing drum transcription vocabulary using data synthesis: ,) <|cite_end|> <|cite_start|> (Reference: Further steps towards drum transcription of polyphonic music: This publication presents a new method for the detection and classification of un-pitched percussive instruments in real world musical signals. The derived information is an important pre-requisite for the creation of a musical score, i.e. automatic transcription, and for the automatic extraction of semantic meaningful meta-data, e.g. tempo and musical meter. The proposed method applies Independent Subspace Analysis using Non-Negative Independent Component Analysis and principles of Prior Subspace Analysis. An important extension of Prior Subspace Analysis is the identification of frequency subspaces of percussive instruments from the signal itself. The frequency subspaces serve as information for the detection of the percussive events and the subsequent classification of the occurring instruments. Results are reported on 40 manually transcribed test items.) <|cite_end|>. Velocity has sometimes been considered during ADT tasks. For example, in DrummerNet <|cite_start|> (Reference: Deep Unsupervised Drum Transcription: We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner. DrummerNet does not require any ground-truth transcription and, with the data-scalability of deep neural networks, learns from a large unlabeled dataset. In DrummerNet, the target drum signal is first passed to a (trainable) transcriber, then reconstructed in a (fixed) synthesizer according to the transcription estimate. By training the system to minimize the distance between the input and the output audio signals, the transcriber learns to transcribe without ground truth transcription. Our experiment shows that DrummerNet performs favorably compared to many other recent drum transcription systems, both supervised and unsupervised.) <|cite_end|>, velocity is used as a hit probability for peak-picking. However, velocity is not predicted as part of the overall model output. To the best of our knowledge, ours is the first model that directly predicts velocity values and evaluates the perceptual quality of resynthesized outputs.
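To make the distinction concrete, the sketch below illustrates the conventional use of activation magnitude purely for thresholded peak-picking, where the peak heights could instead be retained as velocity estimates. This is an illustrative sketch only, not the implementation of any cited system; the activation curve, the threshold value, and the function name are our own assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def pick_onsets(activation: np.ndarray, threshold: float = 0.3):
    """Peak-pick a per-frame activation curve for a single drum class.

    In a DrummerNet-style pipeline, `activation` only gates onset
    detection and is then discarded; a velocity-aware model would also
    expose the height at each detected peak as a velocity estimate.
    """
    peaks, props = find_peaks(activation, height=threshold)
    velocities = props["peak_heights"]  # reused here as velocity estimates
    return peaks, velocities

# Toy activation curve with a loud hit at frame 20 and a softer one at 60.
act = np.zeros(100)
act[20], act[60] = 0.9, 0.5
onsets, vels = pick_onsets(act)
print(onsets, vels)  # -> [20 60] [0.9 0.5]
```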
\begin{table}[ht] \centering \begin{tabular}{lcccc} \hline Dataset & Minutes & Kits & Human & Vel \\ \hline E-GMD & 26,670 & 43 & $\surd$ & $\surd$ \\ TMIDT & 15,540 & 57 & $\times$ & $\times$ \\ IDMT & 130 & 6 & $\times$ & $\times$ \\ ENST & 61 & 3 & $\surd$ & $\times$ \\ MDB Drums & 21 & $\approx$23 & $\surd$ & $\times$ \\ RBMA13 & 103 & $\approx$30 & $\surd$ & $\times$ \\ \hline \end{tabular} \caption{Comparison of public datasets for ADT, including whether they contain exclusively human performances and velocity annotations. The exact number of kits in MDB Drums and RBMA13 is unclear, but is unlikely to exceed the total number of tracks, which is 23 and 30 respectively. All datasets contain isolated drum tracks, with the exception of RBMA13.} \label{datasets} \end{table} <|paper_end|>
[ "<|reference_start|> {From Labeled to Unlabeled Data – On the Data\nChallenge in Automatic Drum Transcription: Automatic Drum Transcription (ADT), like many other music information retrieval tasks, has made progress in the past years through the integration of machine learning and audio signal processing techniques. However, with the increasing popularity of data-hungry approaches such as deep learning, the insufficient amount of data becomes more and more a challenge that concerns the generality of the resulting models and the validity of the evaluation. To address this challenge in ADT, this paper first examines the existing labeled datasets and how representative they are of the research problem. Next, possibilities of using unlabeled data to improve general ADT systems are explored. Specifically, two paradigms that harness information from unlabeled data, namely feature learning and student-teacher learning, are applied to two major types of ADT systems. All systems are evaluated on four different drum datasets. The results highlight the necessity of more and larger annotated datasets and indicate the feasibility of exploiting unlabeled data for improving ADT systems. <|reference_end|>", "<|reference_start|> Real-Time Transcription and Separation of Drum Recordings Based on NMF Decomposition.: This paper proposes a real-time capable method for transcribing and separating occurrences of single drum instruments in poly-phonic drum recordings. Both the detection and the decomposition are based on Non-Negative Matrix Factorization and can be implemented with very small systemic delay. We propose a simple modification to the update rules that allows to capture time-dynamic spectral characteristics of the involved drum sounds. The method can be applied in music production and music education software. Performance results with respect to drum transcription are presented and discussed. The evaluation data-set consisting of annotated drum recordings is published for use in further studies in the field. <|reference_end|>", "<|reference_start|> Drumkit transcription via convolutive NMF: Audio to midi software exists for transcribing the output of a multi-mic’ed drumkit. Such software requires that the drummer uses multiple microphones to capture a single stream of audio for each kit piece. This paper explores the first steps towards a system for transcribing a drum score based upon the input of a single mono microphone. Non-negative Matrix Factorisation is a widely re-searched source separation technique. We describe a system for transcribing drums using this technique presenting an improved gains update method. A good level of accuracy is achieved on on complex loops and there are indications the mis-transcriptions are for perceptually less important parts of the score. <|reference_end|>", "<|reference_start|> Automatic Drum Transcription for Polyphonic Recordings using Soft Attention Mechanisms and Convolutional Neural Networks: Automatic drum transcription is the process of generating symbolic notation for percussion instruments within audio recordings. To date, recurrent neural network (RNN) systems have achieved the highest evaluation accuracies for \nboth drum solo and polyphonic recordings, however the accuracies within a polyphonic context still remain relatively low. 
To improve accuracy for polyphonic recordings, we present two approaches to the ADT problem: First, to capture the dynamism of features in multiple time-step hidden \nlayers, we propose the use of soft attention mechanisms (SA) and an alternative RNN configuration containing additional peripheral connections (PC). Second, to capture these same trends at the input level, we propose the use of a convolutional neural network (CNN), which uses a larger set of time-step features. In addition, we propose the use of a bidirectional recurrent neural network (BRNN) in the peak-picking stage. The proposed systems are evaluated along with two state-of-the-art ADT systems in five \nevaluation scenarios, including a newly-proposed evaluation methodology designed to assess the generalisability of ADT systems. The results indicate that all of the newly proposed systems achieve higher accuracies than the stateof- the-art RNN systems for polyphonic recordings and that \nthe additional BRNN peak-picking stage offers slight improvement in certain contexts. <|reference_end|>" ]
[ 13, 18, 19, 24 ]
{"<|cite_7|>": "arxiv-138637", "<|multi_cite_1_1|>": "ss-1944206", "<|multi_cite_1_2|>": "arxiv-208795", "<|multi_cite_1_3|>": "ss-1467239", "<|multi_cite_1_4|>": "ss-1944207", "<|multi_cite_1_5|>": "ss-1944208", "<|multi_cite_1_6|>": "ss-1944209", "<|multi_cite_1_7|>": "ss-1944210", "<|multi_cite_2_1|>": "ss-1944206", "<|multi_cite_2_2|>": "arxiv-208795", "<|multi_cite_2_3|>": "ss-1467239", "<|multi_cite_2_4|>": "ss-1944208", "<|multi_cite_3_1|>": "arxiv-208795", "<|multi_cite_3_2|>": "ss-1944207", "<|multi_cite_4_1|>": "arxiv-208795", "<|multi_cite_4_2|>": "ss-1944206", "<|multi_cite_4_3|>": "ss-1467239", "<|multi_cite_4_4|>": "ss-1034552", "<|multi_cite_8_1|>": "ss-1800952", "<|multi_cite_8_2|>": "ss-1944211", "<|multi_cite_8_3|>": "ss-1412533", "<|multi_cite_8_4|>": "ss-1944212", "<|multi_cite_8_5|>": "ss-2559792", "<|multi_cite_8_6|>": "ss-1944213", "<|multi_cite_8_7|>": "ss-1412535", "<|multi_cite_9_1|>": "ss-1944206", "<|multi_cite_9_2|>": "ss-1467239", "<|multi_cite_9_3|>": "ss-2428421", "<|cite_5|>": "arxiv-208795"}
2311.10122
<|paper_start|> Title: Video-LLaVA: Learning United Visual Representation by Alignment Before Projection Abstract: Video-LLaVA: Learning United Visual Representation by Alignment Before Projection: The Large Vision-Language Model (LVLM) has enhanced the performance of various downstream tasks in visual-language understanding. Most existing approaches encode images and videos into separate feature spaces, which are then fed as inputs to large language models. However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it becomes challenging for a Large Language Model (LLM) to learn multi-modal interactions from several poor projection layers. In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA, which learns from a mixed dataset of images and videos, mutually enhancing each other. Video-LLaVA achieves superior performances on a broad range of 9 image benchmarks across 5 image question-answering datasets and 4 image benchmark toolkits. Additionally, our Video-LLaVA also outperforms Video-ChatGPT by 5.8%, 9.9%, 18.6%, and 10.1% on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Notably, extensive experiments demonstrate that Video-LLaVA mutually benefits images and videos within a unified visual representation, outperforming models designed specifically for images or videos. We aim for this work to provide modest insights into the multi-modal inputs for the LLM. Code address: \href{https://github.com/PKU-YuanGroup/Video-LLaVA} Introduction \label{sec:intro} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{fig/sota.pdf} \caption{\textbf{Comparing Different LVLM Paradigms.} Video-LLaVA aligns images and videos before projection, allowing LLM to learn from a unified visual representation and endowing LLM with the ability to comprehend both images and videos simultaneously.} \label{fig:sota} \end{figure} Recently, LLMs have gained rapid popularity in the AI community, such as GPT-3.5, GPT-4 <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|>, PaLM <|cite_start|> (Reference: PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation: Self-supervised pre-training, such as BERT, MASS and BART, has emerged as a powerful technique for natural language understanding and generation. 
Existing pre-training techniques employ autoencoding and/or autoregressive objectives to train Transformer-based models by recovering original word tokens from corrupted text with some masked tokens. The training goals of existing techniques are often inconsistent with the goals of many language generation tasks, such as generative question answering and conversational response generation, for producing new text given context. This work presents PALM with a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus, specifically designed for generating new text conditioned on context. The new scheme alleviates the mismatch introduced by the existing denoising scheme between pre-training and fine-tuning where generation is more than reconstructing original text. An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.) <|cite_end|> <|cite_start|> (Reference: PaLM 2 Technical Report: We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.) <|cite_end|>, and BLOOM <|cite_start|> (Reference: BLOOM: A 176B-Parameter Open-Access Multilingual Language Model: Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. 
BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.) <|cite_end|>. They rely on their powerful language comprehension abilities to follow human-provided instructions and provide corresponding responses. Typically, LLMs can only respond within the text input provided by the user, which is insufficient because human interaction with the world involves multiple channels, such as visual and textual. To this end, recent works <|cite_start|> (Reference: mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality: Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation. In this study, we introduce mPLUG-Owl, a novel training paradigm that equips LLMs with multi-modal abilities through modularized learning of foundation LLM, a visual knowledge module, and a visual abstractor module. This approach can support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. The training paradigm of mPLUG-Owl involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM while maintaining and even improving the generation abilities of LLM. In the first stage, the visual knowledge module and abstractor module are trained with a frozen LLM module to align the image and text. In the second stage, language-only and multi-modal supervised datasets are used to jointly fine-tune a low-rank adaption (LoRA) module on LLM and the abstractor module by freezing the visual knowledge module. We carefully build a visually-related instruction evaluation set OwlEval. Experimental results show that our model outperforms existing multi-modal models, demonstrating mPLUG-Owl's impressive instruction and visual understanding ability, multi-turn conversation ability, and knowledge reasoning ability. Besides, we observe some unexpected and exciting abilities such as multi-image correlation and scene text understanding, which makes it possible to leverage it for harder real scenarios, such as vision-only document comprehension. Our code, pre-trained model, instruction-tuned models, and evaluation set are available at https://github.com/X-PLUG/mPLUG-Owl. The online demo is available at https://www.modelscope.cn/studios/damo/mPLUG-Owl.) <|cite_end|> <|cite_start|> (Reference: MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models: The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). 
To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.) <|cite_end|> <|cite_start|> (Reference: Flamingo: a Visual Language Model for Few-Shot Learning: Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.) <|cite_end|> have mapped images into text-like tokens, endowing LLMs with the ability to comprehend images. Despite their effectiveness, empowering LLMs to understand videos is more challenging than image-only comprehension. Nevertheless, recent work <|cite_start|> (Reference: Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models: Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of \emph{video-based conversation} by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos.
We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze the strengths and weaknesses of video-based dialogue models. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.) <|cite_end|> <|cite_start|> (Reference: VideoChat: Chat-Centric Video Understanding: In this paper, we initiate an attempt of developing an end-to-end chat-centric video understanding system, coined as VideoChat. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal relationship inference. To instructively tune this system, we build a video-centric instruction dataset, composed of thousands of videos associated with detailed descriptions and conversations. This dataset emphasizes spatiotemporal reasoning and captures causal relationships, providing a valuable asset for training our chat-centric video understanding system. Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications, which could serve as a simple prototype system for future research on chat-centric video understanding. Access our code and data at https://github.com/OpenGVLab/Ask-Anything) <|cite_end|> <|cite_start|> (Reference: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding: We present Video-LLaMA a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual and audio encoders with LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos.) <|cite_end|> has made initial strides in enabling interactions between video and language. However, most current LVLMs <|cite_start|> (Reference: BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models: The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. 
This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.) <|cite_end|> <|cite_start|> (Reference: InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning: Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-sourced at https://github.com/salesforce/LAVIS/tree/main/projects/instructblip.) <|cite_end|> <|cite_start|> (Reference: Valley: Video Assistant with Large Language model Enhanced abilitY: Large language models (LLMs), with their remarkable conversational capabilities, have demonstrated impressive performance across various applications and have emerged as formidable AI assistants. In view of this, it raises an intuitive question: Can we harness the power of LLMs to build multimodal AI assistants for visual applications? Recently, several multi-modal models have been developed for this purpose. They typically pre-train an adaptation module to align the semantics of the vision encoder and language model, followed by fine-tuning on instruction-following data. However, despite the success of this pipeline in image and language understanding, its effectiveness in joint video and language understanding has not been widely explored. In this paper, we aim to develop a novel multi-modal foundation model capable of comprehending video, image, and language within a general framework. 
To achieve this goal, we introduce Valley, a Video Assistant with Large Language model Enhanced abilitY. The Valley consists of a LLM, a temporal modeling module, a visual encoder, and a simple projection module designed to bridge visual and textual modes. To empower Valley with video comprehension and instruction-following capabilities, we construct a video instruction dataset and adopt a two-stage tuning procedure to train it. Specifically, we employ ChatGPT to facilitate the construction of task-oriented conversation data encompassing various tasks, including multi-shot captions, long video descriptions, action recognition, causal relationship inference, etc. Subsequently, we adopt a pre-training-then-instructions-tuned pipeline to align visual and textual modalities and improve the instruction-following capability of Valley. Qualitative experiments demonstrate that Valley has the potential to function as a highly effective video assistant that can make complex video understanding scenarios easy.) <|cite_end|> <|cite_start|> (Reference: Otter: A Multi-Modal Model with In-Context Instruction Tuning: Large language models (LLMs) have demonstrated significant universal capabilities as few/zero-shot learners in various tasks due to their pre-training on vast amounts of text data, as exemplified by GPT-3, which boosted to InstrctGPT and ChatGPT, effectively following natural language instructions to accomplish real-world tasks. In this paper, we propose to introduce instruction tuning into multi-modal models, motivated by the Flamingo model's upstream interleaved format pretraining dataset. We adopt a similar approach to construct our MultI-Modal In-Context Instruction Tuning (MIMIC-IT) dataset. We then introduce Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following ability and in-context learning. We also optimize OpenFlamingo's implementation for researchers, democratizing the required training resources from 1$\times$ A100 GPU to 4$\times$ RTX-3090 GPUs, and integrate both OpenFlamingo and Otter into Huggingface Transformers for more researchers to incorporate the models into their customized training and inference pipelines.) <|cite_end|> can primarily handle a single visual modality, either image-language or video-language. We compare different LVLM paradigms as shown in~\cref{fig:sota}, where VideoChat <|cite_start|> (Reference: VideoChat: Chat-Centric Video Understanding: In this paper, we initiate an attempt of developing an end-to-end chat-centric video understanding system, coined as VideoChat. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal relationship inference. To instructively tune this system, we build a video-centric instruction dataset, composed of thousands of videos associated with detailed descriptions and conversations. This dataset emphasizes spatiotemporal reasoning and captures causal relationships, providing a valuable asset for training our chat-centric video understanding system. Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications, which could serve as a simple prototype system for future research on chat-centric video understanding. 
Access our code and data at https://github.com/OpenGVLab/Ask-Anything) <|cite_end|> and Video-LLaMA <|cite_start|> (Reference: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding: We present Video-LLaMA a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual and audio encoders with LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos.) <|cite_end|> utilize a shared visual encoder to handle both images and videos. However, due to the inherent differences in the media types of images and videos, it is challenging to learn a unified representation, and the performance falls significantly behind that of the specialized video expert model, Video-ChatGPT. Therefore, X-LLM <|cite_start|> (Reference: X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages: Large language models (LLMs) have demonstrated remarkable language abilities. GPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities beyond previous visual language models. We attribute this to the use of more advanced LLMs compared with previous multimodal models. Unfortunately, the model architecture and training strategies of GPT-4 are unknown. To endow LLMs with multimodal capabilities, we propose X-LLM, which converts Multi-modalities (images, speech, videos) into foreign languages using X2L interfaces and inputs them into a large Language model (ChatGLM). Specifically, X-LLM aligns multiple frozen single-modal encoders and a frozen LLM using X2L interfaces, where ``X'' denotes multi-modalities such as image, speech, and videos, and ``L'' denotes languages. X-LLM's training consists of three stages: (1) Converting Multimodal Information: The first stage trains each X2L interface to align with its respective single-modal encoder separately to convert multimodal information into languages. (2) Aligning X2L representations with the LLM: single-modal encoders are aligned with the LLM through X2L interfaces independently. (3) Integrating multiple modalities: all single-modal encoders are aligned with the LLM through X2L interfaces to integrate multimodal capabilities into the LLM.
Our experiments show that X-LLM demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 84.5\% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. And we also conduct quantitative tests on using LLM for ASR and multimodal ASR, hoping to promote the era of LLM-based speech recognition.) <|cite_end|> and Macaw-LLM <|cite_start|> (Reference: Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration: Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.) <|cite_end|> allocate a modality-specific encoder for each modality, attempting to enable an LLM to comprehend images or videos through several projection layers. However, their performance is inferior to that of dedicated video expert models such as Video-ChatGPT <|cite_start|> (Reference: Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models: Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of \emph{video-based conversation} by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze the strengths and weaknesses of video-based dialogue models. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.) <|cite_end|>. We attribute this phenomenon to the lack of \textit{alignment before projection}. Because image features and video features reside in their own spaces, this poses a challenge for an LLM to learn their interactions from several poor projection layers. A similar phenomenon, \textit{alignment before fusion}, has been discussed by ALBEF <|cite_start|> (Reference: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation: Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks.
Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.) <|cite_end|> and ViLT <|cite_start|> (Reference: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.) <|cite_end|> in multi-modal models. More recently, ImageBind-LLM <|cite_start|> (Reference: ImageBind-LLM: Multi-modality Instruction Tuning: We present ImageBind-LLM, a multi-modality instruction tuning method of large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning, different from which, our ImageBind-LLM can respond to multi-modality conditions, including audio, 3D point clouds, video, and their embedding-space arithmetic by only image-text alignment training. During training, we adopt a learnable bind network to align the embedding space between LLaMA and ImageBind's image encoder. Then, the image features transformed by the bind network are added to word tokens of all layers in LLaMA, which progressively injects visual instructions via an attention-free and zero-initialized gating mechanism.
Aided by the joint embedding of ImageBind, the simple image-text training enables our model to exhibit superior multi-modality instruction-following capabilities. During inference, the multi-modality inputs are fed into the corresponding ImageBind encoders, and processed by a proposed visual cache model for further cross-modal embedding enhancement. The training-free cache model retrieves from three million image features extracted by ImageBind, which effectively mitigates the training-inference modality discrepancy. Notably, with our approach, ImageBind-LLM can respond to instructions of diverse modalities and demonstrate significant language generation quality. Code is released at https://github.com/OpenGVLab/LLaMA-Adapter.) <|cite_end|> focuses on enabling the LLM to simultaneously process multiple modal inputs by pre-aligning each modality to a common feature space <|cite_start|> (Reference: ImageBind: One Embedding Space To Bind Them All: We present ImageBind, an approach to learn a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. We show that all combinations of paired data are not necessary to train such a joint embedding, and only image-paired data is sufficient to bind the modalities together. ImageBind can leverage recent large scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications 'out-of-the-box' including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation. The emergent capabilities improve with the strength of the image encoder and we set a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that ImageBind serves as a new way to evaluate vision models for visual and non-visual tasks.) <|cite_end|>. Based on a large image-language model, ImageBind-LLM converts other modalities into the most similar image features by retrieving from a training-free cached image database. However, the indirect alignment approach of ImageBind-LLM may lead to performance degradation, and the LLM has no knowledge of actual video data. In this work, we introduce \textbf{Video-LLaVA}, a simple but powerful LVLM baseline that handles both images and videos simultaneously. Specifically, as shown in~\cref{fig:sota}, Video-LLaVA initially aligns the representations of images and videos to a unified visual feature space. Since the visual representations are already aligned prior to projection, we employ a shared projection layer to map the unified visual representation for the LLM. To enhance computational efficiency, Video-LLaVA undergoes joint training on images and videos, achieving remarkable results with a single training epoch. As a result, the proposed Video-LLaVA greatly enhances the ability of the LLM to simultaneously understand both images and videos. For image understanding, Video-LLaVA surpasses advanced LVLMs such as mPLUG-owl-7B and InstructBLIP-7B on 5 image benchmarks. Additionally, utilizing 4 benchmark toolkits for a more comprehensive evaluation, Video-LLaVA-7B even outperforms IDEFICS-80B by 6.4\% in MMBench.
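To make the shared-projection design described above concrete, the following minimal sketch shows how a single projection can serve both modalities once their features live in one visual space. This is our own illustrative sketch, not the released Video-LLaVA implementation; the module name, the two-layer MLP shape, and the dimensions (1024-dim visual features, 4096-dim LLM embeddings) are assumptions.

```python
import torch
import torch.nn as nn

class SharedVisualProjector(nn.Module):
    """Minimal sketch: one projection shared by images and videos.

    Assumes both encoders already emit features in a common visual
    space of size `vis_dim` (the "alignment before projection" step),
    so a single MLP can map either modality into the LLM word
    embedding space of size `llm_dim`.
    """
    def __init__(self, vis_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_tokens, vis_dim), where num_tokens
        # is patches for an image or frames x patches for a video.
        return self.proj(visual_tokens)

projector = SharedVisualProjector()
image_feats = torch.randn(2, 256, 1024)      # 256 patch tokens
video_feats = torch.randn(2, 8 * 256, 1024)  # 8 frames x 256 patches
img_tokens = projector(image_feats)  # (2, 256, 4096)
vid_tokens = projector(video_feats)  # (2, 2048, 4096)
```

Because the projector is shared, gradients from image and video samples update the same parameters during joint training, which is one way to read the paper's claim that the two modalities mutually enhance each other.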
Moreover, similar trends can be observed in video understanding, where Video-LLaVA surpasses Video-ChatGPT by 5.8\%, 9.9\%, 18.6\%, and 10.1\% respectively on the MSVD, MSRVTT, TGIF, and ActivityNet video question-answering datasets. Extensive ablation experiments demonstrate that alignment before projection yields greater benefits than projecting unaligned features. Additionally, joint training of images and videos can facilitate a unified visual representation in LLM comprehension. We summarize our primary contributions as follows: \begin{itemize} \item We introduce \textbf{Video-LLaVA}, a powerful LVLM baseline. During the training process, Video-LLaVA binds visual signals to the language feature space, unifying visual representations and realizing alignment before projection. This enables an LLM to perform visual reasoning on both images and videos simultaneously. \item Extensive experiments demonstrate that a unified visual representation benefits LLMs in learning to simultaneously handle both images and videos, validating the complementarity of the two modalities and showcasing significant superiority when compared to models specifically designed for either images or videos. \end{itemize} Related Work \label{sec:related} \subsection{Large Language Models} When the well-known commercial model ChatGPT <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|> was introduced, the AI community released open-source Large Language Models (LLMs) by instruction tuning and increasing model sizes. These include LLaMA <|cite_start|> (Reference: LLaMA: Open and Efficient Foundation Language Models: We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.) <|cite_end|>, Vicuna, Alpaca, and more recently, LLaMA 2 <|cite_start|> (Reference: Llama 2: Open Foundation and Fine-Tuned Chat Models: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases.
Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.) <|cite_end|>. These models are tuned with instruction sets to emulate conversations between humans and AI assistants. Furthermore, InstructGPT <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|> is trained based on GPT-3 <|cite_start|> (Reference: Language Models are Few-Shot Learners: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. 
At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.) <|cite_end|> with 175 billion parameters by aligning it with human preferences. However, such LLMs can only interact through text. In this work, we introduce Video-LLaVA, which builds upon the powerful reasoning capabilities of LLMs to extend modality interactions to images and videos. \begin{table}[htbp] \setlength\tabcolsep{0.85mm} \caption{\textbf{Comparison between different Large Vision-Language Models.} Methods that treat LLMs as a scheduler require neither pre-alignment nor joint training.} \label{tab:lvlm} \centering \begin{tabular}{lcccc} \toprule \textbf{Methods} & \textbf{Image} & \textbf{Video} & \textbf{Pre-aligned} & \textbf{Joint} \\ \midrule \multicolumn{3}{l}{\textit{LLMs as scheduler}} \\ VisualChatGPT & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & - & - \\ HuggingGPT & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & - & - \\ MM-REACT & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & - & - \\ ViperGPT & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & - & - \\ \midrule \multicolumn{3}{l}{\textit{LLMs as decoder}} \\ Mini-GPT4 & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & - & \textcolor{red}{\ding{55}} \\ LLaVA & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & - & \textcolor{red}{\ding{55}} \\ Video-ChatGPT & \textcolor{red}{\ding{55}} & \textcolor{green}{\ding{52}} & - & \textcolor{red}{\ding{55}} \\ VideoChat & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & \textcolor{green}{\ding{52}} \\ Video-LLaMA & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} & \textcolor{green}{\ding{52}} \\ ImageBind-LLM & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{red}{\ding{55}} \\ \midrule \rowcolor{blue} \textbf{Video-LLaVA (Ours)} & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} & \textcolor{green}{\ding{52}} \\ \bottomrule \end{tabular} \end{table} \begin{figure*}[htbp] \centering \includegraphics[width=1.0\linewidth]{fig/Video-LLaVA.pdf} \caption{\textbf{Training framework and performance.} Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset. (a) The Video-LLaVA framework demonstrates a data flow that generates corresponding responses based on input instructions. (b) Video-LLaVA achieves superior performance on a broad range of 15 image and video datasets.} \label{fig:videollava} \end{figure*} \subsection{Large Vision-Language Models} When extending LLMs to multiple modalities, especially images and videos, the main approaches can be categorized into two types, as shown in \cref{tab:lvlm}: \textit{i)} treating the LLM as a scheduler, \textit{ii)} treating the LLM as a decoder. \vspace{0.1cm} \noindent\textbf{LLMs as scheduler} In scheduler-based methods, various visual models are treated as plug-and-play modules, and the LLM schedules them according to the specific visual task requirements, like assembling building blocks.
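Before surveying the individual systems, here is a minimal sketch of this plug-and-play scheduling pattern. It is illustrative only: the tool registry, the stand-in experts, and the keyword-based planner are hypothetical simplifications of what a real LLM-driven scheduler would do.
\begin{verbatim}
# Hypothetical stand-ins for vision experts a scheduler could dispatch to.
def caption_image(path):
    return "a dog catching a frisbee on a beach"

def detect_objects(path):
    return ["dog", "frisbee"]

TOOLS = {"caption": caption_image, "detect": detect_objects}

def llm_plan(request):
    """Stand-in for the LLM call deciding which tools to run; a real
    scheduler would prompt the LLM and parse its chosen tool sequence."""
    plan = []
    if "describe" in request:
        plan.append("caption")
    if "find" in request:
        plan.append("detect")
    return plan

def schedule(request, image_path):
    # Run each selected expert; a real system would have the LLM
    # summarize these intermediate results into a final answer.
    return {name: TOOLS[name](image_path) for name in llm_plan(request)}

print(schedule("describe the photo and find objects", "photo.jpg"))
\end{verbatim}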
Some of these methods focus on images, such as VisualChatGPT <|cite_start|> (Reference: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models: ChatGPT is attracting a cross-field interest as it provides a language interface with remarkable conversational competency and reasoning capabilities across many domains. However, since ChatGPT is trained with languages, it is currently not capable of processing or generating images from the visual world. At the same time, Visual Foundation Models, such as Visual Transformers or Stable Diffusion, although showing great visual understanding and generation capabilities, they are only experts on specific tasks with one-round fixed inputs and outputs. To this end, We build a system called \textbf{Visual ChatGPT}, incorporating different Visual Foundation Models, to enable the user to interact with ChatGPT by 1) sending and receiving not only languages but also images 2) providing complex visual questions or visual editing instructions that require the collaboration of multiple AI models with multi-steps. 3) providing feedback and asking for corrected results. We design a series of prompts to inject the visual model information into ChatGPT, considering models of multiple inputs/outputs and models that require visual feedback. Experiments show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with the help of Visual Foundation Models. Our system is publicly available at \url{https://github.com/microsoft/visual-chatgpt}.) <|cite_end|> and HuggingGPT <|cite_start|> (Reference: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face: Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence. While there are numerous AI models available for various domains and modalities, they cannot handle complicated AI tasks autonomously. Considering large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks, with language serving as a generic interface to empower this. Based on this philosophy, we present HuggingGPT, an LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and abundant AI models in Hugging Face, HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards the realization of artificial general intelligence.) <|cite_end|>, while MM-REACT <|cite_start|> (Reference: MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action: We propose MM-REACT, a system paradigm that integrates ChatGPT with a pool of vision experts to achieve multimodal reasoning and action.
In this paper, we define and explore a comprehensive list of advanced vision tasks that are intriguing to solve, but may exceed the capabilities of existing vision and vision-language models. To achieve such advanced visual intelligence, MM-REACT introduces a textual prompt design that can represent text descriptions, textualized spatial coordinates, and aligned file names for dense visual signals such as images and videos. MM-REACT's prompt design allows language models to accept, associate, and process multimodal information, thereby facilitating the synergetic combination of ChatGPT and various vision experts. Zero-shot experiments demonstrate MM-REACT's effectiveness in addressing the specified capabilities of interests and its wide application in different scenarios that require advanced visual understanding. Furthermore, we discuss and compare MM-REACT's system paradigm with an alternative approach that extends language models for multimodal scenarios through joint finetuning. Code, demo, video, and visualization are available at https://multimodal-react.github.io/) <|cite_end|> and ViperGPT <|cite_start|> (Reference: ViperGPT: Visual Inference via Python Execution for Reasoning: Answering visual queries is a complex task that requires both visual processing and reasoning. End-to-end models, the dominant approach for this task, do not explicitly differentiate between the two, limiting interpretability and generalization. Learning modular programs presents a promising alternative, but has proven challenging due to the difficulty of learning both the programs and modules simultaneously. We introduce ViperGPT, a framework that leverages code-generation models to compose vision-and-language models into subroutines to produce a result for any query. ViperGPT utilizes a provided API to access the available modules, and composes them by generating Python code that is later executed. This simple approach requires no further training, and achieves state-of-the-art results across various complex visual tasks.) <|cite_end|> can also handle videos. A key characteristic of these scheduler-based LVLMs is that they do not require end-to-end training, hence eliminating the need for pre-alignment and joint training of each modality. \vspace{0.1cm} \noindent\textbf{LLMs as decoder} The approach of treating the LLM as a decoder is our primary focus. MiniGPT-4 <|cite_start|> (Reference: MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models: The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts.
Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.) <|cite_end|> aligns image tokens to the input of the large language model through several linear projection layers. However, this alignment is weak and lacks feedback from human instructions. Subsequently, mPLUG-Owl <|cite_start|> (Reference: mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality: Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation. In this study, we introduce mPLUG-Owl, a novel training paradigm that equips LLMs with multi-modal abilities through modularized learning of foundation LLM, a visual knowledge module, and a visual abstractor module. This approach can support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. The training paradigm of mPLUG-Owl involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM while maintaining and even improving the generation abilities of LLM. In the first stage, the visual knowledge module and abstractor module are trained with a frozen LLM module to align the image and text. In the second stage, language-only and multi-modal supervised datasets are used to jointly fine-tune a low-rank adaption (LoRA) module on LLM and the abstractor module by freezing the visual knowledge module. We carefully build a visually-related instruction evaluation set OwlEval. Experimental results show that our model outperforms existing multi-modal models, demonstrating mPLUG-Owl's impressive instruction and visual understanding ability, multi-turn conversation ability, and knowledge reasoning ability. Besides, we observe some unexpected and exciting abilities such as multi-image correlation and scene text understanding, which makes it possible to leverage it for harder real scenarios, such as vision-only document comprehension. Our code, pre-trained model, instruction-tuned models, and evaluation set are available at https://github.com/X-PLUG/mPLUG-Owl. The online demo is available at https://www.modelscope.cn/studios/damo/mPLUG-Owl.) <|cite_end|> adopts a two-stage training approach. In the first stage, images are aligned with language using an auto-regressive pretraining style, and the second stage involves instruction tuning through using a human instruction dataset. With the increasing scale of large language model backends, approaches such as InstructBLIP <|cite_start|> (Reference: InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning: Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. 
However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-sourced at https://github.com/salesforce/LAVIS/tree/main/projects/instructblip.) <|cite_end|> and LLaVA <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|> <|cite_start|> (Reference: Improved Baselines with Visual Instruction Tuning: Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ~1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available.) <|cite_end|> collect larger human instruction datasets to train larger LVLMs (\eg 13B parameters). Each answer in these instruction datasets strictly follows the given instructions.
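As a rough illustration of this answer-follows-instruction format, the sketch below shows one hypothetical visual instruction sample together with the usual loss-masking step that supervises only the assistant's response; the field names and the <image> placeholder are assumptions, not the exact LLaVA or InstructBLIP format.
\begin{verbatim}
IGNORE_INDEX = -100  # common convention: positions with this id add no loss

sample = {
    "image": "coco/000123.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the person holding?"},
        {"from": "assistant", "value": "The person is holding an umbrella."},
    ],
}

def build_targets(token_ids, answer_start):
    """Supervise only the assistant's answer: every position before it is
    masked with IGNORE_INDEX so the loss covers the response tokens alone."""
    return [IGNORE_INDEX] * answer_start + token_ids[answer_start:]

# e.g. 12 prompt tokens (instruction plus image placeholder) then the answer:
token_ids = list(range(20))
print(build_targets(token_ids, answer_start=12))
\end{verbatim}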
These models then undergo end-to-end training using human instruction datasets, endowing the LLM with visual reasoning capabilities. Moreover, Video-ChatGPT <|cite_start|> (Reference: Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models: Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of \emph{video-based conversation} by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT acquired via manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze the strengths and weaknesses of video-based dialogue models. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.) <|cite_end|> designs a 100k video instruction dataset, successfully empowering LLMs to comprehend videos. VideoChat <|cite_start|> (Reference: VideoChat: Chat-Centric Video Understanding: In this paper, we initiate an attempt of developing an end-to-end chat-centric video understanding system, coined as VideoChat. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal relationship inference. To instructively tune this system, we build a video-centric instruction dataset, composed of thousands of videos associated with detailed descriptions and conversations. This dataset emphasizes spatiotemporal reasoning and captures causal relationships, providing a valuable asset for training our chat-centric video understanding system. Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications, which could serve as a simple prototype system for future research on chat-centric video understanding. Access our code and data at https://github.com/OpenGVLab/Ask-Anything) <|cite_end|> and Video-LLaMA <|cite_start|> (Reference: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding: We present Video-LLaMA a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module.
To align the output of both visual and audio encoders with LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos.) <|cite_end|> achieve this by conducting joint training, allowing LLMs to simultaneously handle images and videos. Expanding LLMs to additional visual modalities typically requires pre-alignment, as seen in LLaMA-Adapter <|cite_start|> (Reference: LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention: We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model, and costs less than one hour for fine-tuning on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts, and prepend them to the word tokens at higher transformer layers. Then, a zero-initialized attention mechanism with zero gating is proposed, which adaptively injects the new instructional cues into LLaMA, while effectively preserves its pre-trained knowledge. With our efficient training, LLaMA-Adapter can generate high-quality responses, comparable to Alpaca with fully fine-tuned 7B parameters. Besides language commands, our approach can be simply extended to multi-modal instructions for learning image-conditioned LLaMA model, which achieves superior reasoning performance on ScienceQA and COCO Caption benchmarks. Furthermore, we also evaluate the zero-initialized attention mechanism for fine-tuning other pre-trained models (ViT, RoBERTa) on traditional vision and language tasks, demonstrating the superior generalization capacity of our approach. Code is released at https://github.com/OpenGVLab/LLaMA-Adapter.) <|cite_end|> <|cite_start|> (Reference: LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model: How to efficiently transform large language models (LLMs) into instruction followers is recently a popular research direction, while training LLM for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model besides adapters. Secondly, we propose an early fusion strategy to feed visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g. 
captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.) <|cite_end|> and ImageBind-LLM <|cite_start|> (Reference: ImageBind-LLM: Multi-modality Instruction Tuning: We present ImageBind-LLM, a multi-modality instruction tuning method of large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning, different from which, our ImageBind-LLM can respond to multi-modality conditions, including audio, 3D point clouds, video, and their embedding-space arithmetic by only image-text alignment training. During training, we adopt a learnable bind network to align the embedding space between LLaMA and ImageBind's image encoder. Then, the image features transformed by the bind network are added to word tokens of all layers in LLaMA, which progressively injects visual instructions via an attention-free and zero-initialized gating mechanism. Aided by the joint embedding of ImageBind, the simple image-text training enables our model to exhibit superior multi-modality instruction-following capabilities. During inference, the multi-modality inputs are fed into the corresponding ImageBind encoders, and processed by a proposed visual cache model for further cross-modal embedding enhancement. The training-free cache model retrieves from three million image features extracted by ImageBind, which effectively mitigates the training-inference modality discrepancy. Notably, with our approach, ImageBind-LLM can respond to instructions of diverse modalities and demonstrate significant language generation quality. Code is released at https://github.com/OpenGVLab/LLaMA-Adapter.) <|cite_end|>. They bind other modalities to the image space through ImageBind's <|cite_start|> (Reference: ImageBind: One Embedding Space To Bind Them All: We present ImageBind, an approach to learn a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. We show that all combinations of paired data are not necessary to train such a joint embedding, and only image-paired data is sufficient to bind the modalities together. ImageBind can leverage recent large scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications 'out-of-the-box' including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation. The emergent capabilities improve with the strength of the image encoder and we set a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that ImageBind serves as a new way to evaluate vision models for visual and non-visual tasks.) <|cite_end|> modality encoder. These models have demonstrated that a unified feature space is advantageous for enhancing LLM's multi-modal reasoning capabilities. 
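As a rough sketch of the training-free visual cache idea described in the ImageBind-LLM abstract above, the snippet below replaces a non-image embedding with a similarity-weighted mixture of its nearest cached image features. The cache size, feature dimension, and softmax weighting are illustrative assumptions rather than the exact published procedure.
\begin{verbatim}
import numpy as np

def cache_retrieve(query, cache, top_k=4):
    """Map a non-image embedding (e.g. audio) onto the image modality by
    mixing its top-k nearest neighbours from a bank of image features."""
    q = query / np.linalg.norm(query)
    bank = cache / np.linalg.norm(cache, axis=1, keepdims=True)
    sims = bank @ q                        # cosine similarity to each entry
    idx = np.argsort(-sims)[:top_k]        # indices of the nearest features
    w = np.exp(sims[idx])
    w /= w.sum()                           # softmax weights over neighbours
    return (w[:, None] * cache[idx]).sum(axis=0)

cache = np.random.randn(1000, 1024).astype(np.float32)   # toy feature bank
audio_embedding = np.random.randn(1024).astype(np.float32)
enhanced = cache_retrieve(audio_embedding, cache)
print(enhanced.shape)  # (1024,) image-like feature handed to the LLM
\end{verbatim}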
In contrast to prior work, Video-LLaVA not only pre-aligns image and video features but also conducts joint training on images and videos, enabling LLMs to learn multi-modal reasoning capabilities from a unified visual representation.
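To illustrate what joint training means operationally (a sketch under assumed data structures, not the authors' pipeline), the snippet below draws image and video instruction samples from one mixed pool so that most optimization batches contain both modalities; since both modalities pass through the same shared projector, a single training loop suffices.
\begin{verbatim}
import random

# Hypothetical sample records; real entries would carry pixels/frames
# and conversations rather than bare ids.
image_set = [{"type": "image", "id": i} for i in range(6)]
video_set = [{"type": "video", "id": i} for i in range(6)]

def mixed_batches(batch_size=4, seed=0):
    pool = image_set + video_set
    random.Random(seed).shuffle(pool)  # interleave the two modalities
    for i in range(0, len(pool), batch_size):
        yield pool[i:i + batch_size]

for batch in mixed_batches():
    counts = {"image": 0, "video": 0}
    for s in batch:
        counts[s["type"]] += 1
    print(counts)  # most batches mix both sample types
\end{verbatim}
<|paper_end|>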
[ "<|reference_start|> mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality: Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation. In this study, we introduce mPLUG-Owl, a novel training paradigm that equips LLMs with multi-modal abilities through modularized learning of foundation LLM, a visual knowledge module, and a visual abstractor module. This approach can support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. The training paradigm of mPLUG-Owl involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM while maintaining and even improving the generation abilities of LLM. In the first stage, the visual knowledge module and abstractor module are trained with a frozen LLM module to align the image and text. In the second stage, language-only and multi-modal supervised datasets are used to jointly fine-tune a low-rank adaption (LoRA) module on LLM and the abstractor module by freezing the visual knowledge module. We carefully build a visually-related instruction evaluation set OwlEval. Experimental results show that our model outperforms existing multi-modal models, demonstrating mPLUG-Owl's impressive instruction and visual understanding ability, multi-turn conversation ability, and knowledge reasoning ability. Besides, we observe some unexpected and exciting abilities such as multi-image correlation and scene text understanding, which makes it possible to leverage it for harder real scenarios, such as vision-only document comprehension. Our code, pre-trained model, instruction-tuned models, and evaluation set are available at https://github.com/X-PLUG/mPLUG-Owl. The online demo is available at https://www.modelscope.cn/studios/damo/mPLUG-Owl. <|reference_end|>", "<|reference_start|> ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt. <|reference_end|>", "<|reference_start|> ViperGPT: Visual Inference via Python Execution for Reasoning: Answering visual queries is a complex task that requires both visual processing and reasoning. End-to-end models, the dominant approach for this task, do not explicitly differentiate between the two, limiting interpretability and generalization. 
Learning modular programs presents a promising alternative, but has proven challenging due to the difficulty of learning both the programs and modules simultaneously. We introduce ViperGPT, a framework that leverages code-generation models to compose vision-and-language models into subroutines to produce a result for any query. ViperGPT utilizes a provided API to access the available modules, and composes them by generating Python code that is later executed. This simple approach requires no further training, and achieves state-of-the-art results across various complex visual tasks. <|reference_end|>", "<|reference_start|> VideoChat: Chat-Centric Video Understanding: In this paper, we initiate an attempt of developing an end-to-end chat-centric video understanding system, coined as VideoChat. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal relationship inference. To instructively tune this system, we build a video-centric instruction dataset, composed of thousands of videos associated with detailed descriptions and conversations. This dataset emphasizes spatiotemporal reasoning and captures causal relationships, providing a valuable asset for training our chat-centric video understanding system. Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications, which could serve as a simple prototype system for future research on chat-centric video understanding. Access our code and data at https://github.com/OpenGVLab/Ask-Anything <|reference_end|>" ]
[ 4, 20, 31, 38 ]
{"<|cite_1|>": "arxiv-489148", "<|multi_cite_2_1|>": "arxiv-259518", "<|multi_cite_2_2|>": "arxiv-505787", "<|cite_3|>": "arxiv-460885", "<|multi_cite_4_1|>": "arxiv-500417", "<|multi_cite_4_2|>": "arxiv-498672", "<|multi_cite_4_3|>": "arxiv-416418", "<|multi_cite_5_1|>": "arxiv-514178", "<|multi_cite_5_2|>": "arxiv-503867", "<|multi_cite_5_3|>": "arxiv-512852", "<|multi_cite_6_1|>": "arxiv-477561", "<|multi_cite_6_2|>": "arxiv-503928", "<|multi_cite_6_3|>": "arxiv-515093", "<|multi_cite_6_4|>": "arxiv-502566", "<|cite_7|>": "arxiv-503867", "<|cite_8|>": "arxiv-512852", "<|cite_9|>": "arxiv-502779", "<|cite_10|>": "arxiv-516010", "<|cite_11|>": "arxiv-514178", "<|cite_12|>": "arxiv-355417", "<|cite_13|>": "arxiv-319372", "<|cite_14|>": "arxiv-537498", "<|cite_15|>": "arxiv-503527", "<|cite_16|>": "arxiv-489148", "<|cite_17|>": "arxiv-484616", "<|cite_20|>": "arxiv-524224", "<|cite_21|>": "arxiv-403294", "<|cite_22|>": "arxiv-268228", "<|cite_23|>": "arxiv-487187", "<|cite_24|>": "ss-683281", "<|cite_25|>": "arxiv-490441", "<|cite_26|>": "arxiv-488841", "<|cite_27|>": "arxiv-498672", "<|cite_28|>": "arxiv-500417", "<|cite_29|>": "arxiv-503928", "<|multi_cite_30_1|>": "arxiv-497716", "<|multi_cite_30_2|>": "arxiv-546041", "<|cite_31|>": "arxiv-514178", "<|cite_32|>": "arxiv-503867", "<|cite_33|>": "arxiv-512852", "<|multi_cite_34_1|>": "arxiv-492810", "<|multi_cite_34_2|>": "arxiv-500827", "<|cite_35|>": "arxiv-537498", "<|cite_36|>": "arxiv-503527"}
2408.17237
<|paper_start|> Title: A nonlinear elasticity model in computer vision Abstract: A nonlinear elasticity model in computer vision: The purpose of this paper is to analyze a nonlinear elasticity model previously introduced by the authors for comparing two images, regarded as bounded open subsets of $\R^n$ together with associated vector-valued intensity maps. Optimal transformations between the images are sought as minimisers of an integral functional among orientation-preserving homeomorphisms. The existence of minimisers is proved under natural coercivity and polyconvexity conditions, assuming only that the intensity functions are bounded measurable. Variants of the existence theorem are also proved, first under the constraint that finite sets of landmark points in the two images are mapped one to the other, and second when one image is to be compared to an unknown part of another. The question is studied as to whether for images related by a linear mapping the unique minimizer is given by that linear mapping. For a natural class of functional integrands an example is given guaranteeing that this property holds for pairs of images in which the second is a scaling of the first by a constant factor. However, for the property to hold for arbitrary pairs of linearly related images it is shown that the integrand has to depend on the gradient of the transformation as a convex function of its determinant alone. This suggests a new model in which the integrand depends also on second derivatives of the transformation, and an example is given for which both existence of minimizers is assured and the above property holds for all pairs of linearly related images. Introduction The purpose of this paper is to analyze a nonlinear elasticity model introduced in <|cite_start|> (Reference: Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.) <|cite_end|> for comparing two images $P_1=(\Omega_1,c_1),\; P_2=(\Omega_2,c_2)$, regarded as bounded Lipschitz domains $\om_1,\om_2$ in $\R^n$ with corresponding intensity maps $c_1:\om_1\to\R^m, c_2:\om_2\to\R^m$. The model is based on an integral functional \be \label{0} I_{P_1,P_2}(y)=\int_{\Omega_1}\psi(c_1(x), c_2(y(x)),Dy(x)) \,\dx, \ee depending on $c_1,c_2$ and a map $y:\om_1\to\om_2$ with gradient $Dy$, whose minimisers give optimal transformations between images. The admissible transformations $y$ between the images are assumed to be orientation-preserving homeomorphisms with $y(\om_1)=\om_2$, and are not required to satisfy other boundary conditions. We study in particular whether $\psi$ can be chosen so that when $P_2$ is related to $P_1$ by a linear mapping the unique minimizer of $I_{P_1,P_2}$ is given by that linear mapping. There are various approaches to image comparison, for example using optimal transport, flow of diffeomorphisms (metamorphosis) <|cite_start|> (Reference: Shapes and Diffeomorphisms: ) <|cite_end|> and machine learning. Approaches based on linear elasticity are also often used (see, for example, <|cite_start|> (Reference: Numerical methods for image registration: 1. Introduction 2.
The Human Neuroscanning Project 3. The mathematical setting I PARAMETRIC IMAGE REGISTRATION 4. Landmark based registration 5. Principal axes based registration 6. Optimal linear registration 7. Summarizing parametric image registration II NON-PARAMETRIC IMAGE REGISTRATION 8. Non-parametric image registration 9. Elastic registration 10. Fluid registration 11. Diffusion registration 12. Curvature registration 13. Concluding remarks) <|cite_end|>). The use of nonlinear elasticity is less common (a fairly complete list of papers is <|cite_start|> (Reference: Variational Methods in Image Matching and Motion Extraction: ) <|cite_end|> <|cite_start|> (Reference: An Elasticity-Based Covariance Analysis of Shapes: ) <|cite_end|> <|cite_start|> (Reference: A hyperelastic regularization energy for image registration: Image registration is one of the most challenging problems in image processing, where ill-posedness arises due to noisy data as well as nonuniqueness, and hence the choice of regularization is crucial. This paper presents hyperelasticity as a regularizer and introduces a new and stable numerical implementation. On one hand, hyperelastic registration is an appropriate model for large and highly nonlinear deformations, for which a linear elastic model needs to fail. On the other hand, the hyperelastic regularizer yields very regular and diffeomorphic transformations. While hyperelasticity might be considered as just an additional outstanding regularization option for some applications, it becomes inevitable for applications involving higher order distance measures like mass-preserving registration. The paper gives a short introduction to image registration and hyperelasticity. The hyperelastic image registration problem is phrased in a variational setting, and an existence proof is provided. The focus of th...) <|cite_end|> <|cite_start|> (Reference: A variational model dedicated to joint segmentation, registration, and atlas generation for shape analysis: In medical image analysis, constructing an atlas, i.e. a mean representative of an ensemble of images, is a critical task for practitioners to estimate variability of shapes inside a population, and to characterise and understand how structural shape changes have an impact on health. This involves identifying significant shape constituents of a set of images, a process called segmentation, and mapping this group of images to an unknown mean image, a task called registration, making a statistical analysis of the image population possible. To achieve this goal, we propose treating these operations jointly to leverage their positive mutual influence, in a hyperelasticity setting, by viewing the shapes to be matched as Ogden materials. The approach is complemented by novel hard constraints on the $L^\infty$ norm of both the Jacobian and its inverse, ensuring that the deformation is a bi-Lipschitz homeomorphism. Segmentation is based on the Potts model, which allows for a partition into more than two regions, i.e. more than one shape. The connection to the registration problem is ensured by the dissimilarity measure that aims to align the segmented shapes. A representation of the deformation field in a linear space equipped with a scalar product is then computed in order to perform a geometry-driven Principal Component Analysis (PCA) and to extract the main modes of variations inside the image population. 
Theoretical results emphasizing the mathematical soundness of the model are provided, among which existence of minimisers, analysis of a numerical method of resolution, asymptotic results and a PCA analysis, as well as numerical simulations demonstrating the ability of the modeling to produce an atlas exhibiting sharp edges, high contrast and a consistent shape.) <|cite_end|> <|cite_start|> (Reference: Shape-Aware Matching of Implicit Surfaces Based on Thin Shell Energies: ) <|cite_end|> <|cite_start|> (Reference: Symmetry and scaling limits for matching of implicit surfaces based on thin shell energies: In a recent paper by Iglesias, Rumpf and Scherzer (Found. Comput. Math. 18(4), 2018) a variational model for deformations matching a pair of shapes given as level set functions was proposed. Its main feature is the presence of anisotropic energies active only in a narrow band around the hypersurfaces that resemble the behavior of elastic shells. In this work we consider some extensions and further analysis of that model. First, we present a symmetric energy functional such that given two particular shapes, it assigns the same energy to any given deformation as to its inverse when the roles of the shapes are interchanged, and introduce the adequate parameter scaling to recover a surface problem when the width of the narrow band vanishes. Then, we obtain existence of minimizing deformations for the symmetric energy in classes of bi-Sobolev homeomorphisms for small enough widths, and prove a $\Gamma$-convergence result for the corresponding non-symmetric energies as the width tends to zero. Finally, numerical results on realistic shape matching applications demonstrating the effect of the symmetric energy are presented.) <|cite_end|> <|cite_start|> (Reference: Nonlinear Elasticity Registration and Sobolev Gradients: ) <|cite_end|> <|cite_start|> (Reference: Joint segmentation/registration model by shape alignment via weighted total variation minimization and nonlinear elasticity: This paper falls within the scope of joint segmentation-registration using nonlinear elasticity principles. Because Saint Venant--Kirchhoff materials are the simplest hyperelastic materials (hyperelasticity being a suitable framework when dealing with large and nonlinear deformations), we propose viewing the shapes to be matched as such materials. Then we introduce a variational model combining a measure of dissimilarity based on weighted total variation and a regularizer based on the stored energy function of a Saint Venant--Kirchhoff material. Adding a weighted total variation--based criterion enables us to align the edges of the objects even when the modalities are different. We derive a relaxed problem associated to the initial one for which we are able to provide a result of existence of minimizers. A description and analysis of a numerical method of resolution based on a decoupling principle is then provided including a theoretical result of $\Gamma$-convergence. Applications are illustrated in acad...) <|cite_end|> <|cite_start|> (Reference: Topology preservation for image-registration-related deformation fields: In this paper, we address the issue of designing a theoretically well-motivated and computationally efficient method ensuring topology preservation on image-registration-related deformation fields. 
The model is motivated by a mathematical characterization of topology preservation for a deformation field mapping two subsets of Z2, namely, positivity of the four approximations to the Jacobian determinant of the deformation on a square patch. The first step of the proposed algorithm thus consists in correcting the gradient vector field of the deformation (that does not comply with the topology preservation criteria) at the discrete level in order to fulfill this positivity condition. Once this step is achieved, it thus remains to reconstruct the deformation field, given its full set of discrete gradient vectors. We propose to decompose the reconstruction problem into independent problems of smaller dimensions, yielding a natural parallelization of the computations and enabling us to reduce drastically the computational time (up to 80 in some applications). For each subdomain, a functional minimization problem under Lagrange interpolation constraints is introduced and its well-posedness is studied: existence/uniqueness of the solution, characterization of the solution, convergence of the method when the number of data increases to infinity, discretization with the Finite Element Method and discussion on the properties of the matrix involved in the linear system. Numerical simulations based on OpenMP parallelization and MKL multi-threading demonstrating the ability of the model to handle large deformations (contrary to classical methods) and the interest of having decomposed the problem into smaller ones are provided.) <|cite_end|> <|cite_start|> (Reference: A Hyperelastic Two-Scale Optimization Model for Shape Matching: We suggest a novel shape matching algorithm for three-dimensional surface meshes of disk or sphere topology. The method is based on the physical theory of nonlinear elasticity and can hence handle large rotations and deformations. Deformation boundary conditions that supplement the underlying equations are usually unknown. Given an initial guess, these are optimized such that the mechanical boundary forces that are responsible for the deformation are of a simple nature. We show a heuristic way to approximate the nonlinear optimization problem by a sequence of convex problems using finite elements. The deformation cost, i.e, the forces, is measured on a coarse scale while ICP-like matching is done on the fine scale. We demonstrate the plausibility of our algorithm on examples taken from different datasets.) <|cite_end|> <|cite_start|> (Reference: A joint segmentation/registration model based on a nonlocal characterization of weighted total variation and nonlocal shape descriptors: Segmentation and registration are cornerstone steps of many imaging situations: while segmentation aims to identify relevant constituents of an image for visualization or quantitative analysis, registration consists of mapping salient features of an image onto the corresponding ones in another. Instead of treating these tasks linearly one after another, so without correlating them, we propose a unified variational model, in a hyperelasticity setting, processing these two operations simultaneously. The dissimilarity measure relates local and global (or region-based) information, since it relies on weighted total variation and nonlocal shape descriptors inspired by the piecewise constant Mumford--Shah model. 
Theoretical results emphasizing the mathematical and practical soundness of the model are provided, including existence of minimizers, connection with the segmentation step, nonlocal characterization of weighted seminorms, asymptotic results, and $\Gamma$-convergence properties. A preliminary version of...) <|cite_end|>) but offers significant advantages over linear elasticity because it allows for large deformations and respects rotational invariance. Our model is closely related to that of Droske \& Rumpf, Rumpf <|cite_start|> (Reference: Variational Methods in Image Matching and Motion Extraction: ) <|cite_end|> and Rumpf \& Wirth <|cite_start|> (Reference: An Elasticity-Based Covariance Analysis of Shapes: ) <|cite_end|>, and like them (as also done in <|cite_start|> (Reference: A hyperelastic regularization energy for image registration: Image registration is one of the most challenging problems in image processing, where ill-posedness arises due to noisy data as well as nonuniqueness, and hence the choice of regularization is crucial. This paper presents hyperelasticity as a regularizer and introduces a new and stable numerical implementation. On one hand, hyperelastic registration is an appropriate model for large and highly nonlinear deformations, for which a linear elastic model needs to fail. On the other hand, the hyperelastic regularizer yields very regular and diffeomorphic transformations. While hyperelasticity might be considered as just an additional outstanding regularization option for some applications, it becomes inevitable for applications involving higher order distance measures like mass-preserving registration. The paper gives a short introduction to image registration and hyperelasticity. The hyperelastic image registration problem is phrased in a variational setting, and an existence proof is provided. The focus of th...) <|cite_end|> <|cite_start|> (Reference: A variational model dedicated to joint segmentation, registration, and atlas generation for shape analysis: In medical image analysis, constructing an atlas, i.e. a mean representative of an ensemble of images, is a critical task for practitioners to estimate variability of shapes inside a population, and to characterise and understand how structural shape changes have an impact on health. This involves identifying significant shape constituents of a set of images, a process called segmentation, and mapping this group of images to an unknown mean image, a task called registration, making a statistical analysis of the image population possible. To achieve this goal, we propose treating these operations jointly to leverage their positive mutual influence, in a hyperelasticity setting, by viewing the shapes to be matched as Ogden materials. The approach is complemented by novel hard constraints on the $L^\infty$ norm of both the Jacobian and its inverse, ensuring that the deformation is a bi-Lipschitz homeomorphism. Segmentation is based on the Potts model, which allows for a partition into more than two regions, i.e. more than one shape. The connection to the registration problem is ensured by the dissimilarity measure that aims to align the segmented shapes. A representation of the deformation field in a linear space equipped with a scalar product is then computed in order to perform a geometry-driven Principal Component Analysis (PCA) and to extract the main modes of variations inside the image population. 
Theoretical results emphasizing the mathematical soundness of the model are provided, among which existence of minimisers, analysis of a numerical method of resolution, asymptotic results and a PCA analysis, as well as numerical simulations demonstrating the ability of the modeling to produce an atlas exhibiting sharp edges, high contrast and a consistent shape.) <|cite_end|> <|cite_start|> (Reference: Shape-Aware Matching of Implicit Surfaces Based on Thin Shell Energies: ) <|cite_end|> <|cite_start|> (Reference: Symmetry and scaling limits for matching of implicit surfaces based on thin shell energies: In a recent paper by Iglesias, Rumpf and Scherzer (Found. Comput. Math. 18(4), 2018) a variational model for deformations matching a pair of shapes given as level set functions was proposed. Its main feature is the presence of anisotropic energies active only in a narrow band around the hypersurfaces that resemble the behavior of elastic shells. In this work we consider some extensions and further analysis of that model. First, we present a symmetric energy functional such that given two particular shapes, it assigns the same energy to any given deformation as to its inverse when the roles of the shapes are interchanged, and introduce the adequate parameter scaling to recover a surface problem when the width of the narrow band vanishes. Then, we obtain existence of minimizing deformations for the symmetric energy in classes of bi-Sobolev homeomorphisms for small enough widths, and prove a $\Gamma$-convergence result for the corresponding non-symmetric energies as the width tends to zero. Finally, numerical results on realistic shape matching applications demonstrating the effect of the symmetric energy are presented.) <|cite_end|>) we adapt the existence theory for polyconvex energies in nonlinear elasticity in <|cite_start|> (Reference: Convexity conditions and existence theorems in nonlinear elasticity: ) <|cite_end|> to our situation. Our model is briefly reviewed in Section \ref{nle}, in which invariance conditions on $\psi$ from <|cite_start|> (Reference: Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.) <|cite_end|> are recalled. In Section \ref{existence} the existence of an absolute minimizer for \eqref{0} is established (see Theorem \ref{exthm}) under continuity, polyconvexity and coercivity conditions on the integrand $\psi$, assuming only that the intensity functions $c_1,c_2$ are in $L^\infty$. The coercivity condition is weaker than that assumed in <|cite_start|> (Reference: Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.) 
<|cite_end|> in that, making use of a recent result, we can allow $n^{\rm th}$ power growth of $\psi$ rather than $p^{\rm th}$ power growth for $p>n$. We remark (see Section \ref{metric}) that the minimization problem can be used to indirectly define a metric on images. Two variants of the existence theorem are given. In the first (Theorem \ref{thm:landmark}) existence of a minimizer is proved under the constraint that a finite number of distinct landmark points in $\om_1$ are mapped to given distinct points in $\om_2$. In the second (Theorem \ref{part}) existence of a minimizer is proved in the set of orientation-preserving homeomorphisms $y:\om_1\to a+A\om_1\subset\om_2$, where $a\in\R^n$ and $A$ belongs to a relatively closed subset of $M^{n\times n}_+$, corresponding to matching a template image $P_1=(\om_1,c_1)$ with part of $P_2=(\om_2,c_2)$ allowing for changes of scale and orientation. In Section \ref{scaling} we consider the case when two images are related by a linear transformation, and ask for which $\psi$ the minimization algorithm delivers this linear transformation as the unique minimizer. We consider $\psi$ of the form \be \psi(c_1,c_2,A)=\Psi(A)+(1+\det A)|c_1-c_2|^2. \ee We first show that $\Psi$ can be chosen so that $\psi$ satisfies both the invariance conditions in Section \ref{nle} and the hypotheses of Theorem \ref{exthm}, and such that, under a mild nondegeneracy condition, for any pair of images $(P_1,P_2)$ related by a uniform scaling the unique minimizer of $I_{P_1,P_2}$ is given by that magnification. However, for the functional to deliver as a minimizer the linear transformation between {\it any} linearly related pair of images, we show (see Theorem \ref{generalM}) that $\Psi(A)=h(\det A)$ for some convex $h$. The statement of Theorem \ref{generalM} improves the corresponding statement in <|cite_start|> (Reference: Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.) <|cite_end|> by assuming only that $\Psi$ is continuous and by weakening the regularity requirements on the boundary, and is proved by constructing suitable tangential variations. Theorem \ref{generalM} implies in particular that in order for the minimization algorithm to deliver the linear transformation between linearly related images, $\psi$ cannot be coercive, rendering existence problematic. However, by adding a suitable dependence on the second derivatives $D^2y$ we can retain this property as well as existence, and this is proved for an example in Theorem \ref{Dtwo}. For a discussion of issues related to numerical implementation of the minimization problems discussed in this paper see <|cite_start|> (Reference: Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses.
We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.) <|cite_end|>.\\ <|paper_end|>
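To fix ideas, the following numpy sketch evaluates a discretized version of the comparison functional with the integrand $\psi(c_1,c_2,A)=\Psi(A)+(1+\det A)|c_1-c_2|^2$ discussed above, assuming the functional has the form $\int_{\om_1}\psi(c_1(x),c_2(y(x)),Dy(x))\,dx$. The particular convex choice $h(d)=d+1/d$ for $\Psi(A)=h(\det A)$, the nearest-neighbour interpolation of $c_2\circ y$, and the finite-difference approximation of $Dy$ are all assumptions of this illustration, not prescriptions of the paper.

```python
import numpy as np

def h(det_a):
    # Illustrative convex function of det A, minimized at det A = 1; the
    # theory singles out Psi(A) = h(det A) with h convex but does not fix h.
    return det_a + 1.0 / det_a

def comparison_energy(c1, c2, y, hx=1.0):
    """Riemann-sum approximation of int_{Om1} psi(c1(x), c2(y(x)), Dy(x)) dx
    in 2D, with psi(c1, c2, A) = h(det A) + (1 + det A) |c1 - c2|^2.

    c1, c2 : 2D intensity arrays for the two images.
    y      : array of shape (2, m, n) holding the deformation y : Om1 -> Om2,
             with values in pixel coordinates of c2; we assume y is
             orientation preserving, i.e. det Dy > 0 on the grid.
    """
    # Deformation gradient A = Dy by central finite differences.
    dy1_dx1, dy1_dx2 = np.gradient(y[0], hx)
    dy2_dx1, dy2_dx2 = np.gradient(y[1], hx)
    det_a = dy1_dx1 * dy2_dx2 - dy1_dx2 * dy2_dx1

    # c2 composed with y, via nearest-neighbour lookup (enough for a sketch).
    i = np.clip(np.rint(y[0]).astype(int), 0, c2.shape[0] - 1)
    j = np.clip(np.rint(y[1]).astype(int), 0, c2.shape[1] - 1)
    c2_of_y = c2[i, j]

    integrand = h(det_a) + (1.0 + det_a) * (c1 - c2_of_y) ** 2
    return integrand.sum() * hx ** 2
```

Minimizing this discrete energy over grids of admissible deformations is then a finite-dimensional stand-in for the variational problems analyzed above.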
[ "<|reference_start|> Shape-Aware Matching of Implicit Surfaces Based on Thin Shell Energies: <|reference_end|>", "<|reference_start|> Variational Methods in Image Matching and Motion Extraction: <|reference_end|>", "<|reference_start|> A variational model dedicated to joint segmentation, registration, and atlas generation for shape analysis: In medical image analysis, constructing an atlas, i.e. a mean representative of an ensemble of images, is a critical task for practitioners to estimate variability of shapes inside a population, and to characterise and understand how structural shape changes have an impact on health. This involves identifying significant shape constituents of a set of images, a process called segmentation, and mapping this group of images to an unknown mean image, a task called registration, making a statistical analysis of the image population possible. To achieve this goal, we propose treating these operations jointly to leverage their positive mutual influence, in a hyperelasticity setting, by viewing the shapes to be matched as Ogden materials. \nThe approach is complemented by novel hard constraints on the $L^\\infty$ norm of both the Jacobian and its inverse, ensuring that the deformation is a bi-Lipschitz homeomorphism. Segmentation is based on the Potts model, which allows for a partition into more than two regions, i.e. more than one shape. The connection to the registration problem is ensured by the dissimilarity measure that aims to align the segmented shapes. A representation of the deformation field in a linear space equipped with a scalar product is then computed in order to perform a geometry-driven Principal Component Analysis (PCA) and to extract the main modes of variations inside the image population. Theoretical results emphasizing the mathematical soundness of the model are provided, among which existence of minimisers, analysis of a numerical method of resolution, asymptotic results and a PCA analysis, as well as numerical simulations demonstrating the ability of the modeling to produce an atlas exhibiting sharp edges, high contrast and a consistent shape. <|reference_end|>", "<|reference_start|> Image comparison and scaling via nonlinear elasticity: A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer. <|reference_end|>" ]
[ 7, 14, 17, 21 ]
{"<|cite_1|>": "arxiv-489797", "<|cite_3|>": "ss-1257426", "<|cite_4|>": "ss-1521053", "<|multi_cite_5_2|>": "ss-2448601", "<|multi_cite_5_3|>": "ss-2297780", "<|multi_cite_5_4|>": "ss-1286042", "<|multi_cite_5_5|>": "ss-2153256", "<|multi_cite_5_6|>": "ss-681292", "<|multi_cite_5_7|>": "arxiv-288083", "<|multi_cite_5_8|>": "ss-2448602", "<|multi_cite_5_9|>": "ss-2297775", "<|multi_cite_5_10|>": "ss-1080743", "<|multi_cite_5_11|>": "arxiv-81684", "<|multi_cite_5_12|>": "ss-2297772", "<|cite_7|>": "ss-2448601", "<|cite_8|>": "ss-2297780", "<|multi_cite_9_1|>": "ss-1286042", "<|multi_cite_9_2|>": "ss-2153256", "<|multi_cite_9_3|>": "ss-681292", "<|multi_cite_9_4|>": "arxiv-288083", "<|cite_10|>": "ss-2384062", "<|cite_11|>": "arxiv-489797", "<|cite_12|>": "arxiv-489797", "<|cite_14|>": "arxiv-489797", "<|cite_15|>": "arxiv-489797"}
2003.08537
<|paper_start|> Title: HOSVD-Based Algorithm for Weighted Tensor Completion Abstract: HOSVD-Based Algorithm for Weighted Tensor Completion: Matrix completion, the problem of completing missing entries in a data matrix with low dimensional structure (such as rank), has seen many fruitful approaches and analyses. Tensor completion is the tensor analog, which attempts to impute missing tensor entries under similar low-rank assumptions. In this paper, we study the tensor completion problem when the sampling pattern is deterministic and possibly non-uniform. We first propose an efficient weighted HOSVD algorithm for recovery of the underlying low-rank tensor from noisy observations and then derive the error bounds under a properly weighted metric. Additionally, the efficiency and accuracy of our algorithm are both tested using synthetic and real datasets in numerical simulations. Introduction In many data-rich domains such as computer vision, neuroscience, and social networks, tensors have emerged as a powerful paradigm for handling the data deluge. In recent years, tensor analysis has gained increasing attention. To a certain degree, tensors can be viewed as the generalization of matrices to higher dimensions, and thus multiple questions from matrix analysis extend naturally to tensors. Similar to matrix decomposition, the problem of tensor decomposition (decomposing an input tensor into several less complex components) has been widely studied both in theory and application (see e.g., <|cite_start|> (Reference: The Expression of a Tensor or a Polyadic as a Sum of Products: ) <|cite_end|> <|cite_start|> (Reference: {Tensor Decompositions and Applications: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.) <|cite_end|> <|cite_start|> (Reference: Extension of PCA to Higher Order Data Structures: An Introduction to Tensors, Tensor Decompositions, and Tensor PCA: The widespread use of multisensor technology and the emergence of big data sets have brought the necessity to develop more versatile tools to represent higher order data with multiple aspects and high dimensionality. Data in the form of multidimensional arrays, also referred to as tensors, arise in a variety of applications including chemometrics, hyperspectral imaging, high-resolution videos, neuroimaging, biometrics, and social network analysis. Early multiway data analysis approaches reformatted such tensor data as large vectors or matrices and then resorted to dimensionality reduction methods developed for classical two-way analysis such as principal component analysis (PCA).
However, one cannot discover hidden components within multiway data using conventional PCA. To this end, tensor decomposition methods which are flexible in the choice of the constraints and that extract more general latent components have been proposed. In this paper, we review the major tensor decomposition methods with a focus on problems targeted by classical PCA. In particular, we present tensor methods that aim to solve three important challenges typically addressed by PCA: dimensionality reduction, i.e., low-rank tensor approximation; supervised learning, i.e., learning linear subspaces for feature extraction; and robust low-rank tensor recovery. We also provide experimental results to compare different tensor models for both dimensionality reduction and supervised learning applications.) <|cite_end|>). Thus far, the problem of low-rank tensor completion, which aims to complete missing or unobserved entries of a low-rank tensor, is one of the most actively studied problems (see e.g., <|cite_start|> (Reference: Uncovering the spatio-temporal dynamics of memes in the presence of incomplete information: Modeling, understanding, and predicting the spatio-temporal dynamics of online memes are important tasks, with ramifications on location-based services, social media search, targeted advertising and content delivery networks. However, the raw data revealing these dynamics are often incomplete and error-prone; for example, API limitations and data sampling policies can lead to an incomplete (and often biased) perspective on these dynamics. Hence, in this paper, we investigate new methods for uncovering the full (underlying) distribution through a novel spatio-temporal dynamics recovery framework which models the latent relationships among locations, memes, and times. By integrating these hidden relationships into a tensor-based recovery framework -- called AirCP -- we find that high-quality models of meme spread can be built with access to only a fraction of the full data. Experimental results on both synthetic and real-world Twitter hashtag data demonstrate the promising performance of the proposed framework: an average improvement of over 27% in recovering the spatio-temporal dynamics of hashtags versus five state-of-the-art alternatives.) <|cite_end|> <|cite_start|> (Reference: {Tensor completion for estimating missing values in visual data: In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. 
To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependant relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLTRC and HaLRTC are more efficient than SiLRTC and between FaLRTC and HaLRTC the former is more efficient to obtain a low accuracy solution and the latter is preferred if a high-accuracy solution is desired.) <|cite_end|> <|cite_start|> (Reference: Factor matrix trace norm minimization for low-rank tensor completion: Most existing low-n-rank minimization algorithms for tensor completion suffer from high computational cost due to involving multiple singular value decompositions (SVDs) at each iteration. To address this issue, we propose a novel factor matrix trace norm minimization method for tensor completion problems. Based on the CANDECOMP/PARAFAC (CP) decomposition, we first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, which leads to a convex combination problem of much smaller scale matrix nuclear norm minimization. Finally, we develop an efficient alternating direction method of multipliers (ADMM) scheme to solve the proposed problem. Experimental results on both synthetic and real-world data validate the effectiveness of our approach. Moreover, our method is significantly faster than the state-of-the-art approaches and scales well to handle large datasets.) <|cite_end|> <|cite_start|> (Reference: Tensor Completion Algorithms in Big Data Analytics: Tensor completion is a problem of filling the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and achievement in areas like data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity. We characterize these advances from four perspectives: general tensor completion algorithms, tensor completion with auxiliary information (variety), scalable tensor completion algorithms (volume), and dynamic tensor completion algorithms (velocity). Further, we identify several tensor completion applications on real-world data-driven problems and present some common experimental frameworks popularized in the literature. Our goal is to summarize these popular methods and introduce them to researchers and practitioners for promoting future research and applications. 
We conclude with a discussion of key challenges and promising research directions in this community for future exploration.) <|cite_end|>). It is noteworthy that, owing to various unpredictable or unavoidable factors, multidimensional datasets are commonly raw and incomplete, and thus often only a small subset of a tensor's entries is available. It is, therefore, natural to address the above issue using tensor completion in modern data-driven applications, in which data are naturally represented as a tensor, such as image/video inpainting <|cite_start|> (Reference: Low-rank tensor completion by Riemannian optimization: ) <|cite_end|> <|cite_start|> (Reference: {Tensor completion for estimating missing values in visual data: In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependant relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLTRC and HaLRTC are more efficient than SiLRTC and between FaLRTC and HaLRTC the former is more efficient to obtain a low accuracy solution and the latter is preferred if a high-accuracy solution is desired.) <|cite_end|>, link-prediction <|cite_start|> (Reference: Link prediction in heterogeneous data via generalized coupled tensor factorization: ) <|cite_end|>, and recommendation systems <|cite_start|> (Reference: Tag recommendations based on tensor dimensionality reduction: Social tagging is the process by which many users add metadata in the form of keywords, to annotate and categorize information items (songs, pictures, web links, products etc.). Collaborative tagging systems recommend tags to users based on what tags other users have used for the same items, aiming to develop a common consensus about which tags best describe an item.
However, they fail to provide appropriate tag recommendations, because: (i) users may have different interests for an information item and (ii) information items may have multiple facets. In contrast to the current tag recommendation algorithms, our approach develops a unified framework to model the three types of entities that exist in a social tagging system: users, items and tags. These data is represented by a 3-order tensor, on which latent semantic analysis and dimensionality reduction is performed using the Higher Order Singular Value Decomposition (HOSVD) technique. We perform experimental comparison of the proposed method against two state-of-the-art tag recommendations algorithms with two real data sets (Last.fm and BibSonomy). Our results show significant improvements in terms of effectiveness measured through recall/precision.) <|cite_end|>, to name a few. In the past few decades, the matrix completion problem, which is a special case of tensor completion, has been extensively studied. In matrix completion, there are mature algorithms <|cite_start|> (Reference: A Singular Value Thresholding Algorithm for Matrix Completion: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.) <|cite_end|>, theoretical foundations <|cite_start|> (Reference: Matrix Completion via Max-Norm Constrained Optimization: Matrix completion has been well studied under the uniform sampling model and the trace-norm regularized methods perform well both theoretically and numerically in such a setting. 
However, the uniform sampling model is unrealistic for a range of applications and the standard trace-norm relaxation can behave very poorly when the underlying sampling scheme is non-uniform. In this paper we propose and analyze a max-norm constrained empirical risk minimization method for noisy matrix completion under a general sampling model. The optimal rate of convergence is established under the Frobenius norm loss in the context of approximately low-rank matrix reconstruction. It is shown that the max-norm constrained method is minimax rate-optimal and yields a unified and robust approximate recovery guarantee, with respect to the sampling distributions. The computational effectiveness of this method is also discussed, based on first-order algorithms for solving convex optimizations involving max-norm regularization.) <|cite_end|> <|cite_start|> (Reference: Matrix Completion With Noise: On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.) <|cite_end|> <|cite_start|> (Reference: Exact Matrix Completion via Convex Optimization: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. 
Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.) <|cite_end|> and various applications <|cite_start|> (Reference: Uncovering Shared Structures in Multiclass Classification: This paper suggests a method for multiclass learning with many classes by simultaneously learning shared characteristics common to the classes, and predictors for the classes in terms of these characteristics. We cast this as a convex optimization problem, using trace-norm regularization and study gradient-based optimization both for the linear case and the kernelized setting.) <|cite_end|> <|cite_start|> (Reference: Accelerated Structured Alternating Projections for Robust Spectrally Sparse Signal Recovery: Consider a spectrally sparse signal $\boldsymbol{x}$ that consists of $r$ complex sinusoids with or without damping. We study the robust recovery problem for the spectrally sparse signal under the fully observed setting, which is about recovering $\boldsymbol{x}$ and a sparse corruption vector $\boldsymbol{s}$ from their sum $\boldsymbol{z}=\boldsymbol{x}+\boldsymbol{s}$. In this paper, we exploit the low-rank property of the Hankel matrix formed by $\boldsymbol{x}$, and formulate the problem as the robust recovery of a corrupted low-rank Hankel matrix. We develop a highly efficient non-convex algorithm, coined Accelerated Structured Alternating Projections (ASAP). The high computational efficiency and low space complexity of ASAP are achieved by fast computations involving structured matrices, and a subspace projection method for accelerated low-rank approximation. Theoretical recovery guarantee with a linear convergence rate has been established for ASAP, under some mild assumptions on $\boldsymbol{x}$ and $\boldsymbol{s}$. Empirical performance comparisons on both synthetic and real-world data confirm the advantages of ASAP, in terms of computational efficiency and robustness aspects.) <|cite_end|> <|cite_start|> (Reference: Rank Aggregation via Nuclear Norm Minimization: The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings.) <|cite_end|> <|cite_start|> (Reference: Interior-Point Method for Nuclear Norm Approximation with Application to System Identification: The nuclear norm (sum of singular values) of a matrix is often used in convex heuristics for rank minimization problems in control, signal processing, and statistics. Such heuristics can be viewed as extensions of $\ell_1$-norm minimization techniques for cardinality minimization and sparse signal estimation. In this paper we consider the problem of minimizing the nuclear norm of an affine matrix-valued function. 
This problem can be formulated as a semidefinite program, but the reformulation requires large auxiliary matrix variables, and is expensive to solve by general-purpose interior-point solvers. We show that problem structure in the semidefinite programming formulation can be exploited to develop more efficient implementations of interior-point methods. In the fast implementation, the cost per iteration is reduced to a quartic function of the problem dimensions and is comparable to the cost of solving the approximation problem in the Frobenius norm. In the second part of the paper, the nuclear norm approximation algorithm is applied to system identification. A variant of a simple subspace algorithm is presented in which low-rank matrix approximations are computed via nuclear norm minimization instead of the singular value decomposition. This has the important advantage of preserving linear matrix structure in the low-rank approximation. The method is shown to perform well on publicly available benchmark data.) <|cite_end|> that pave the way for solving the tensor completion problem in high-order tensors. Recently, \mbox{Foucart et al. <|cite_start|> (Reference: Weighted matrix completion from non-random, non-uniform sampling patterns: We study the matrix completion problem when the observation pattern is deterministic and possibly non-uniform. We propose a simple and efficient debiased projection scheme for recovery from noisy observations and analyze the error under a suitable weighted metric. We introduce a simple function of the weight matrix and the sampling pattern that governs the accuracy of the recovered matrix. We derive theoretical guarantees that upper bound the recovery error and nearly matching lower bounds that showcase optimality in several regimes. Our numerical experiments demonstrate the computational efficiency and accuracy of our approach, and show that debiasing is essential when using non-uniform sampling patterns.) <|cite_end|>} proposed a simple algorithm for matrix completion for general deterministic sampling patterns, and raised the following questions: given a deterministic sampling pattern $\Omega$ and corresponding (possibly noisy) observations of the matrix entries, what type of recovery error can we expect? In what metric? How can we efficiently implement recovery? These were investigated in <|cite_start|> (Reference: Weighted matrix completion from non-random, non-uniform sampling patterns: We study the matrix completion problem when the observation pattern is deterministic and possibly non-uniform. We propose a simple and efficient debiased projection scheme for recovery from noisy observations and analyze the error under a suitable weighted metric. We introduce a simple function of the weight matrix and the sampling pattern that governs the accuracy of the recovered matrix. We derive theoretical guarantees that upper bound the recovery error and nearly matching lower bounds that showcase optimality in several regimes. Our numerical experiments demonstrate the computational efficiency and accuracy of our approach, and show that debiasing is essential when using non-uniform sampling patterns.) <|cite_end|> by introducing an appropriate \textit{weighted} error metric for matrix recovery of the form $\|H\hadam(\widehat{M}-M)\|_F$, where $M$ is the true underlying low-rank matrix, $\widehat{M}$ refers to the recovered matrix, and $H$ is a best rank-1 matrix approximation for the sampling pattern $\Omega$. 
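To make the matrix-case objects concrete before moving to tensors, the sketch below computes a best rank-1 approximation $H$ of a binary sampling pattern from its leading singular pair and evaluates the weighted metric $\|H\hadam(\widehat{M}-M)\|_F$. Reading "a best rank-1 matrix approximation" as the leading singular pair is our assumption for this illustration; any rank-1 fit to $\boldsymbol{1}_{\Omega}$ could be substituted.

```python
import numpy as np

def rank_one_weight(mask):
    """Best rank-1 Frobenius approximation of a 0/1 sampling pattern,
    from the leading singular pair; its entries can be taken nonnegative
    (Perron-Frobenius) since the mask is entrywise nonnegative."""
    u, s, vt = np.linalg.svd(mask.astype(float), full_matrices=False)
    return np.abs(s[0] * np.outer(u[:, 0], vt[0]))

def weighted_error(h, m_hat, m):
    """The weighted recovery metric ||H o (M_hat - M)||_F."""
    return np.linalg.norm(h * (m_hat - m))
```

For a uniform pattern the weight $H$ is essentially constant and the metric reduces (up to scale) to the usual Frobenius error; for non-uniform patterns it down-weights poorly sampled regions.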
In this regard, similar questions arise for the problem of tensor completion with deterministic sampling patterns. Unfortunately, as is often the case, moving from the matrix setting to the tensor setting presents non-trivial challenges, and notions such as \textit{rank} and SVD need to be re-defined and re-evaluated. We address these extensions for the completion problem here. Motivated by the matrix case, we propose an appropriate \textit{weighted} error metric for tensor recovery of the form $\|\mathcal{H}\hadam(\widehat{\mathcal{T}}-\mathcal{T})\|_F$, where $\mathcal{T}$ is the true underlying low-rank tensor, $\widehat{\mathcal{T}}$ is the recovered tensor, and $\mathcal{H}$ is an appropriate weight tensor. In existing work, the error metric is limited to the form $\|\widehat{\mathcal{T}}-\mathcal{T}\|_F$, which corresponds to the case that all the entries of $\mathcal{H}$ are 1, so that $\mathcal{H}$ can be considered a {CP} rank-1 tensor. This motivates us to rephrase the questions mentioned above as follows. {\bf Main questions. } Given a sampling pattern $\Omega$ and noisy observations $\mathcal{T}+\mathcal{Z}$ on $\Omega$, for what rank-one weight tensor $\mathcal{H}$ can we efficiently find a tensor $\widehat{\mathcal{T}}$ so that \mbox{$\|\mathcal{H}\hadam(\widehat{\mathcal{T}}-\mathcal{T})\|_F$} is small compared to $\left\|\mathcal{H}\right\|_F$? And how can we efficiently find such a weight tensor $\mathcal{H}$, or determine that a fixed $\mathcal{H}$ has this property? \subsection{Contributions} Our main goal is to provide an algorithmic tool, theoretical analysis, and numerical results that address the above questions. In this paper, we propose a simple weighted Higher Order Singular Value Decomposition (HOSVD) method. Before we implement the weighted HOSVD algorithm, we first appropriately approximate the sampling pattern $\Omega$ with a rank-one tensor $\mathcal{H}$. We can achieve high accuracy if $\|\mathcal{H}-\mathcal{H}^{(-1)}\hadam\boldsymbol{1}_{\Omega}\|_F$ is small, where $\mathcal{H}^{(-1)}$ denotes the element-wise inverse. Finally, we present empirical results on synthetic and real datasets. The simulation results show that when the sampling pattern is non-uniform, the use of weights in the weighted HOSVD algorithm is essential; moreover, the results of the weighted HOSVD algorithm provide a very good initialization for the total variation minimization algorithm, which dramatically reduces the number of iterations without loss of accuracy. In doing so, we extend the weighted matrix completion results of <|cite_start|> (Reference: Weighted matrix completion from non-random, non-uniform sampling patterns: We study the matrix completion problem when the observation pattern is deterministic and possibly non-uniform. We propose a simple and efficient debiased projection scheme for recovery from noisy observations and analyze the error under a suitable weighted metric. We introduce a simple function of the weight matrix and the sampling pattern that governs the accuracy of the recovered matrix. We derive theoretical guarantees that upper bound the recovery error and nearly matching lower bounds that showcase optimality in several regimes. Our numerical experiments demonstrate the computational efficiency and accuracy of our approach, and show that debiasing is essential when using non-uniform sampling patterns.) <|cite_end|> to the tensor setting. \subsection{Organization} The paper is organized as follows.
In Section \ref{section:tcp}, we give a brief review of related work and concepts for tensor analysis, introduce notation, and state the tensor completion problem under study. Our main results are stated in Section \ref{sec:results}, and the proofs are provided in Appendices \ref{section:proofFgub} and \ref{section:proofFHOSVD}. The numerical results are provided and discussed in Section \ref{section:simulations}. <|paper_end|>
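As a complement to the outline above, here is a minimal numpy sketch of one plausible reading of the weighted HOSVD pipeline: the zero-filled observations are debiased entrywise by the rank-one weight tensor $\mathcal{H}$ (in the spirit of the debiased projection scheme of Foucart et al.) and then projected to a prescribed multilinear rank, mode by mode. The debiasing step, the choice of multilinear ranks, and the entrywise positivity of $\mathcal{H}$ are assumptions of this sketch; the paper's precise estimator is stated in its main results, which are not reproduced in this excerpt.

```python
import numpy as np

def unfold(t, mode):
    """Mode-`mode` unfolding of a tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, m, mode):
    """Multiply a tensor along `mode` by the matrix m."""
    return np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=(1, 0)),
                       0, mode)

def hosvd_truncate(t, ranks):
    """Truncated HOSVD: project every mode of t onto the span of the top
    singular vectors of the corresponding unfolding."""
    out = t
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        proj = u[:, :r] @ u[:, :r].T  # rank-r orthogonal projector
        out = mode_multiply(out, proj, mode)
    return out

def weighted_hosvd(y_obs, mask, h, ranks):
    """Debias the zero-filled observations entrywise by the rank-one weight
    tensor h (assumed entrywise positive), then truncate to multilinear
    rank `ranks` -- one plausible reading of the weighted HOSVD estimator."""
    debiased = np.where(mask, y_obs / h, 0.0)
    return hosvd_truncate(debiased, ranks)
```

For a 3-mode tensor one would call, e.g., `weighted_hosvd(y, mask, h, ranks=(r1, r2, r3))`, with `h` built as an outer product of per-mode weight vectors so that it is CP rank-1 by construction.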
[ "<|reference_start|> The Expression of a Tensor or a Polyadic as a Sum of Products: <|reference_end|>", "<|reference_start|> {Tensor completion for estimating missing values in visual data: In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependant relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLTRC and HaLRTC are more efficient than SiLRTC and between FaLRTC and HaLRTC the former is more efficient to obtain a low accuracy solution and the latter is preferred if a high-accuracy solution is desired. <|reference_end|>", "<|reference_start|> Link prediction in heterogeneous data via generalized coupled tensor factorization: <|reference_end|>", "<|reference_start|> Matrix Completion With Noise: On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. 
A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout. <|reference_end|>" ]
[ 0, 4, 9, 13 ]
{"<|multi_cite_1_1|>": "ss-987323", "<|multi_cite_1_2|>": "ss-1356700", "<|multi_cite_1_3|>": "ss-1256055", "<|multi_cite_2_1|>": "ss-2276938", "<|multi_cite_2_2|>": "ss-945603", "<|multi_cite_2_3|>": "ss-2276939", "<|multi_cite_2_4|>": "arxiv-141417", "<|multi_cite_3_1|>": "ss-1233215", "<|multi_cite_3_2|>": "ss-945603", "<|cite_4|>": "ss-910781", "<|cite_5|>": "ss-1272250", "<|cite_6|>": "ss-1522476", "<|multi_cite_7_1|>": "arxiv-42501", "<|multi_cite_7_2|>": "arxiv-6795", "<|multi_cite_7_3|>": "arxiv-3881", "<|multi_cite_8_1|>": "ss-1489674", "<|multi_cite_8_2|>": "arxiv-228631", "<|multi_cite_8_3|>": "arxiv-19554", "<|multi_cite_8_4|>": "ss-889411", "<|cite_9|>": "arxiv-231652", "<|cite_10|>": "arxiv-231652", "<|cite_11|>": "arxiv-231652"}
2110.02279
<|paper_start|> Title: Turing approximations, toric isometric embeddings & manifold convolutions Abstract: Turing approximations, toric isometric embeddings & manifold convolutions: Convolutions are fundamental elements in deep learning architectures. Here, we present a theoretical framework for combining extrinsic and intrinsic approaches to manifold convolution through isometric embeddings into tori. In this way, we define a convolution operator for a manifold of arbitrary topology and dimension. We also explain geometric and topological conditions that make some local definitions of convolutions, which rely on translating filters along geodesic paths on a manifold, computationally intractable. A result of Alan Turing from 1938 underscores the need for such a toric isometric embedding approach to achieve a global definition of convolution on computable, finite metric space approximations to a smooth manifold. Introduction In Convolutional Neural Networks (CNNs) <|cite_start|> (Reference: Deep Learning in Neural Networks: An Overview: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.) <|cite_end|> <|cite_start|> (Reference: gradient-based learning applied to document recognition: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.) <|cite_end|>, the convolution operations allow a given filter to be applied to each part of a data file (typically images).
Then, as the image is translated, activations in each network layer respond with similar translations. This {\it equivariance} property, together with pooling, allows each neuron to express the influence of nearby neurons while training. A guiding principle of deep learning is the manifold distribution hypothesis <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather than inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|>. It posits that high-dimensional data concentrate close to a (nonlinear) lower-dimensional manifold. The field of manifold learning has grown rapidly. Recall that given data (such as point clouds in some ${\bf R}^n$) it is possible to construct a manifold of a given smoothness fitted to them <|cite_start|> (Reference: Fitting a Putative Manifold to Noisy Data: In the present work, we give a solution to the following question from manifold learning. Suppose data belonging to a high dimensional Euclidean space is drawn independently, identically distributed from a measure supported on a low dimensional twice differentiable embedded manifold M, and corrupted by a small amount of Gaussian noise. How can we produce a manifold Mo whose Hausdorff distance to M is small and whose reach is not much smaller than the reach of M?) <|cite_end|>. Descriptions of such estimators, which approximate these data and are themselves manifolds with bounded reach, can be consulted in <|cite_start|> (Reference: {Testing the Manifold Hypothesis: The hypothesis that high dimensional data tend to lie in the vicinity of a low dimensional manifold is the basis of manifold learning. The goal of this paper is to develop an algorithm (with accompanying complexity guarantees) for fitting a manifold to an unknown probability distribution supported in a separable Hilbert space, only using i.i.d samples from that distribution. More precisely, our setting is the following. Suppose that data are drawn independently at random from a probability distribution $P$ supported on the unit ball of a separable Hilbert space $H$. Let $G(d, V, \tau)$ be the set of submanifolds of the unit ball of $H$ whose volume is at most $V$ and reach (which is the supremum of all $r$ such that any point at a distance less than $r$ has a unique nearest point on the manifold) is at least $\tau$. Let $L(M, P)$ denote mean-squared distance of a random point from the probability distribution $P$ to $M$. We obtain an algorithm that tests the manifold hypothesis in the following sense. The algorithm takes i.i.d random samples from $P$ as input, and determines which of the following two is true (at least one must be): (a) There exists $M \in G(d, CV, \frac{\tau}{C})$ such that $L(M, P) \leq C \epsilon.$ (b) There exists no $M \in G(d, V/C, C\tau)$ such that $L(M, P) \leq \frac{\epsilon}{C}.$ The answer is correct with probability at least $1-\delta$.) <|cite_end|>.
A review of the development of manifold learning is available <|cite_start|> (Reference: Manifold Learning Theory and Applications: Trained to extract actionable information from large volumes of high-dimensional data, engineers and scientists often have trouble isolating meaningful low-dimensional structures hidden in their high-dimensional observations. Manifold learning, a groundbreaking technique designed to tackle these issues of dimensionality reduction, finds widespread application in machine learning, neural networks, pattern recognition, image processing, and computer vision. Filling a void in the literature, Manifold Learning Theory and Applications incorporates state-of-the-art techniques in manifold learning with a solid theoretical and practical treatment of the subject. Comprehensive in its coverage, this pioneering work explores this novel modality from algorithm creation to successful implementation, offering examples of applications in medical, biometrics, multimedia, and computer vision. Emphasizing implementation, it highlights the various permutations of manifold learning in industry including manifold optimization, large scale manifold learning, semidefinite programming for embedding, manifold models for signal acquisition, compression and processing, and multi scale manifold. Beginning with an introduction to manifold learning theories and applications, the book includes discussions on the relevance to nonlinear dimensionality reduction, clustering, graph-based subspace learning, spectral learning and embedding, extensions, and multi-manifold modeling. It synergizes cross-domain knowledge for interdisciplinary instructions, offers a rich set of specialized topics contributed by expert professionals and researchers from a variety of fields. Finally, the book discusses specific algorithms and methodologies using case studies to apply manifold learning for real-world problems.) <|cite_end|>. The choice of data representation strongly affects the performance of machine learning algorithms. In recent years, there has been increasing interest in extending CNNs to arbitrary, non-Euclidean manifolds. A significant challenge has been finding a rigorous definition of convolution on manifolds, because addition and subtraction of points are generally not defined on a manifold. We propose a new way to define convolutions on manifolds by first isometrically embedding the manifold into a high dimensional torus and then extending a continuous function from the isometric image of the manifold to the target torus. The convolution of the extended functions on the torus then defines the convolution of the original pair of functions. This new definition of convolution is global and works for compact Riemannian manifolds of any dimension. Informally, we highlight that by first embedding a manifold isometrically into a higher dimensional Euclidean space and fixing a box around it, translations along the axes of this box then permit the definition of a convolution operator. Imagine that the manifold is inside a unit cube, periodically copied along all axes in all directions, and then standard CNNs can be defined on top. Carrying this process out with rigor requires specific control of geometric quantities, which we explain below. A toric isometric embedding (TIE) provides a geometric context where discretizations of the ambient space can take into account the intrinsic symmetries of the original manifold completely.
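As a concrete stand-in for the extension step just described, the sketch below extends sampled function values from the (approximate) isometric image of $M$ in the flat torus $[0,1)^n$ to a regular grid, using a normalized Gaussian bump with periodic distances. The Gaussian profile and the bandwidth parameter `eps` are assumptions of this sketch; the paper's construction uses a geometrically controlled bump function instead.

```python
import numpy as np

def torus_extend(points, values, grid_shape, eps=0.05):
    """Extend sampled values from the embedded image of M in [0,1)^n
    to a regular grid on the flat torus, by a normalized Gaussian bump.

    points : (N, n) samples of the isometric image of M in the torus.
    values : (N,) function values f(p_i) at those samples.
    """
    axes = [np.arange(s, dtype=float) / s for s in grid_shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)

    # Periodic (wrap-around) displacement on each coordinate circle.
    diff = grid[..., None, :] - points
    diff -= np.rint(diff)                      # map to [-1/2, 1/2)
    d2 = (diff ** 2).sum(axis=-1)              # squared toric distance

    w = np.exp(-d2 / (2.0 * eps ** 2))         # bump weights
    return (w * values).sum(axis=-1) / w.sum(axis=-1)
```

The bandwidth should be chosen small relative to the reach of the embedded image, so that the extension blurs only within a narrow band around the manifold.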
The advantage of working with isometric (or even almost isometric) embeddings is that they provide the best of intrinsic and extrinsic worlds. The isometric property preserves the intrinsic geometry, while the global toric coordinates of the embedding allow for convolutions, and in general Fourier analysis, to be carried out. Assume the compact smooth connected manifold $M$ is embedded isometrically into the $n$--dimensional torus $T^n$. A function $f$ on $M$ extends to a function $\bar{f}$ on $T^{n}$ (see Lemma \ref{lem-conv-mfds}). Let $k$ be a kernel on $M$, and likewise write $\bar{k}$ for its extension in $T^{n}$. Our main contribution is the following Theorem/Definition: \begin{Theorem} \label{thm-mfds-r-treps} A global convolution operator can be defined on closed orientable smooth manifolds using toric isometric embeddings (TIE). \begin{Definition}[TIE convolution on manifolds]\label{def-TIE-conv} A convolution operator between two functions $f$ and $k$ in $M$, called the TIE convolution and denoted by $f\bowtie k$, can be defined as follows:\[ (f \bowtie k)(x) := \int_{{\bf T}^n}\bar{f}(y)\bar{k}(x-y)dy \] \end{Definition} \end{Theorem} Observe that the definition of $f\bowtie k$ is subordinate to an embedding of $M$ into $T^n$. In turn, its computational complexity will depend on the embedding dimension $n$. This approach permits the definition of CNNs on datasets whose elements are smooth manifolds for a fixed embedding method. A discretized version of the TIE-convolution is readily available, as we are now working on a torus (equation (\ref{eqn:disc-3d-conv}) shows one example in 3D). A notable consequence of Theorem \ref{def-TIE-conv} for the field of {\it geometry processing} is that shapes in 3D space admit a 3D TIE convolution $f\bowtie k$, which can be {\bf globally} defined on meshes, 3D point-clouds, and voxel representations, all with arbitrary topology. These representations are just embeddings of the data into $3$--dimensional space. Moreover, in dimension 3, the explicit computation of the {\em reach} of an embedding of a surface into ${\bf R}^3$ can be achieved using the medial axis. As an example, consider the canonical embedding of the sphere $S^2$ in ${\bf R}^3$, realized by unit norm vectors. In this case, the coordinate functions are eigenmaps, and they define an embedding. Similarly, a collection of $n$ eigenmaps can define embeddings of a smooth orientable $d$--manifold into ${\bf R}^n$. These approaches are just illustrative examples because, in practice, we want the embeddings to be isometric. Performing a convolution using a bump function has the effect of blurring an image or a shape. In this work, we apply such a convolution with a geometrically controlled bump function after increasing the ambient dimension via an isometric embedding, which permits a global convolution to be defined. In practice, we need to fix an embedding dimension. There are various available strategies for finding isometric embeddings. The embedding target dimension $n$ affects the performance of the neural networks that use a TIE convolution, as the number of weights in a CNN grows polynomially with $n$. An area of opportunity for improvement here lies in finding the optimal embedding dimension for a given dataset or learning task.
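Because the extended functions live on a flat torus, the discretized TIE convolution is an ordinary circular convolution on a periodic grid and diagonalizes under the discrete Fourier transform. The sketch below is a generic FFT implementation of such a circular convolution; it is not the paper's specific discretization (equation (\ref{eqn:disc-3d-conv}) is not reproduced in this excerpt), and the grid-cell volume factor is exposed as a parameter.

```python
import numpy as np

def tie_convolution(f_bar, k_bar, cell_volume=1.0):
    """Discrete circular convolution of two functions sampled on the same
    periodic grid, computed via the FFT; `cell_volume` rescales the sum
    into a Riemann-sum approximation of the integral over the torus."""
    assert f_bar.shape == k_bar.shape
    conv = np.fft.ifftn(np.fft.fftn(f_bar) * np.fft.fftn(k_bar))
    return cell_volume * np.real(conv)
```

For a shape voxelized on an $m\times m\times m$ periodic grid this costs $O(m^3\log m)$, independently of the topology of the embedded surface.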
Historically, the problem of finding an isometric embedding of a smooth closed Riemannian manifold was first solved by John Nash (\emph{$C^1$ isometric imbeddings}; \emph{The imbedding problem for Riemannian manifolds}). Other valuable strategies realize embeddings into $\ell^2$ (\emph{Embedding Riemannian manifolds by their heat kernel}), and recent advances improve on these ideas using heat kernels (\emph{Embeddings of Riemannian manifolds with heat kernels and eigenfunctions}; \emph{Isometric embeddings via heat kernel}) and eigenvector fields of the connection Laplacian (\emph{Embeddings of Riemannian manifolds with finite eigenvector fields of connection Laplacian}). Embeddings have also been constructed using eigenfunctions of the Laplace--Beltrami operator (\emph{Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering}), using KDE and local PCA (\emph{Manifold Learning Using Kernel Density Estimation and Local Principal Components Analysis}), or by strengthening Whitney embeddings to produce almost isometric embeddings (\emph{Reconstruction and Interpolation of Manifolds I: The Geometric Whitney Problem}), among many others. Using a Nash-type embedding, the embedding dimension $n$ grows quadratically in $d = \dim M$ (\emph{Distance preserving embeddings for general $n$-dimensional manifolds}).
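As a rough illustration of the eigenmap-style strategies just cited, the following sketch embeds a sampled manifold using eigenvectors of a graph Laplacian; the dense Gaussian graph and the bandwidth sigma are illustrative choices, and the resulting embedding is in general only approximately faithful, not isometric.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmap_embedding(points, n_components=3, sigma=0.3):
    """Embed sampled manifold points into R^{n_components} using the
    lowest nontrivial eigenvectors of an (unnormalized) graph Laplacian,
    a discrete stand-in for Laplace--Beltrami eigenfunctions."""
    W = np.exp(-cdist(points, points, "sqeuclidean") / (2.0 * sigma**2))
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    vals, vecs = eigh(L)                # eigenvalues in ascending order
    # Skip the constant eigenvector (eigenvalue ~ 0) of a connected graph.
    return vecs[:, 1:n_components + 1]

# Example: points on a circle embed into the plane as (roughly) a circle,
# mirroring the eigenmap embedding of S^1 by its coordinate functions.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
emb = laplacian_eigenmap_embedding(circle, n_components=2)
\end{verbatim}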
Compared to other, sometimes local, methods of defining convolutions, our approach works efficiently on a manifold $M$ of arbitrary topology. Indeed, translating a filter between points depends on moving between the points and then making sense of how the filter changes. Let $g$ be a smooth Riemannian metric on $M$. When considering geodesics between the points for this task, the strategy requires taking an average over all such geodesics. In practice, some have proposed using regions in which there is a unique geodesic between any two points. This approach is well defined locally, in a chart, but not globally, because a single chart may not cover the entire manifold $M$. Thus, the problem of moving a filter has to account for how the filter changes as it moves along different geodesic paths, and therefore it involves averaging over the possible geodesics. The study of the function $C(x,y,\ell)$ that counts the number of geodesics of length at most $\ell$ between $x$ and $y$ has a long history---started by Serre---and it is known to have profound connections to the topology and geometry of the underlying manifold (\emph{Quasi-geodesic flows}). Recall that an algorithm is {\it efficient} if it runs in polynomial time.
Topological restrictions to efficient algorithms for computing $C(x,y,\ell)$ on surfaces and $3$--manifolds first come in the form of the growth type of the {\it fundamental group}. In particular, for surfaces and $3$--manifolds, we have the following results: \begin{Theorem}\label{Thm:sface-intractable-filters} Let $\Sigma$ be a compact orientable connected surface of genus $>1$. Then for any smooth Riemannian metric $g$ on $\Sigma$, the strategy of averaging filters translated over geodesics between pairs of points $x$ and $y$ on $\Sigma$ is not efficient. \end{Theorem} \begin{Theorem}\label{Thm:3mfds-intractable-filters} Let $Y$ be a smooth Riemannian $3$--manifold that is neither homeomorphic to a geometric manifold modelled on one of the Thurston geometries ${\bf S}^3, {\bf S}^2\times {\bf R}, {\bf E}^3, {\bf Nil}$, nor homeomorphic to a connected sum $L(2,1)\# L(2,1)$ of a lens space $L(2,1)$, whose fundamental group has order $2$, with itself. Then for any smooth Riemannian metric $g$ on $Y$, the strategy of averaging filters translated over geodesics between pairs of points $x$ and $y$ on $Y$ is not efficient. \end{Theorem} These obstructions highlight the merits of TIE convolutions over other methods. The manifolds left out by Theorem \ref{Thm:3mfds-intractable-filters} are precisely those whose fundamental group has polynomial growth. Thus, in principle, there could be efficient algorithms for computing the geodesic path counting function on these manifolds. In terms of homology, a well-known result of M. Gromov (\emph{Homotopical effects of dilatation}; see also \emph{Quasi-geodesic flows}) bounds $C(x,y,\ell)$ by the Betti numbers of $M$. Even when the fundamental group is trivial, rational homotopy theory establishes when the function $C(x,y,\ell)$ grows exponentially in $\ell$, because it is also bounded below by the growth of the rational homotopy groups of $M$ (\emph{Rational homotopy theory}).
For example, already in dimension four, the complex projective plane blown up at three points, ${\bf C}{\rm P}^2 \# 3\,\overline{{\bf C}{\rm P}}{}^2$, is simply connected, and the geodesic counting function $C(x,y,\ell)$ of any smooth Riemannian metric on it has exponential growth. This renders strategies for defining convolutions that rely on translating along geodesics intractable on such a manifold. The phenomenon is explained rigorously by our next result. Recall that a simply connected manifold $M$ is said to be {\it rationally elliptic} if the total rational homotopy $\pi_{\ast}(M)\otimes {\bf Q}$ is finite-dimensional; that is, $\pi_{k}(M)\otimes {\bf Q} = 0$ for all $k>k_0$, for some positive integer $k_0$. The manifold $M$ is called {\it rationally hyperbolic} if it is not rationally elliptic. \begin{Theorem}\label{Thm:rat-hyp-intractable-filters} Let $M$ be a smooth, closed, simply connected, rationally hyperbolic $n$--manifold, $n\geq 4$. Then for any smooth Riemannian metric $g$ on $M$, the strategy of averaging filters translated over geodesics between pairs of points $x$ and $y$ on $M$ is not efficient. \end{Theorem} This last result leads to the question of how prevalent rationally hyperbolic manifolds are among all manifolds. This kind of intractability is generic in the choice of a Riemannian metric, in the following sense: \begin{Theorem}\label{Thm:htop-intractable-filters} Let $M$ be a closed, smooth $n$--manifold, $n\geq 2$. Then the strategy of averaging filters translated over geodesics between pairs of points $x$ and $y$ in $M$ is not efficient for a set of $C^\infty$ Riemannian metrics $g$ on $M$ that is open and dense in the space of $C^\infty$ Riemannian metrics equipped with the $C^\infty$ topology. \end{Theorem} The proof uses a profound result by Contreras, which guarantees that Riemannian metrics of positive topological entropy are generic for any smooth manifold (\emph{Geodesic flows with positive topological entropy, twist maps and hyperbolicity}). These theorems exhibit how topological and dynamical properties can make some local strategies for defining convolutions intractable. Moreover, they highlight the question of when a manifold admits a metric with $h=0$. In the case of $T^2$, geometric descriptions of such metrics use bands that bound lifts of geodesics to the universal covering (\emph{Characterization of Geodesic Flows on $T^2$ with and without Positive Topological Entropy}). In general, however, this challenging research direction remains wide open. To end on a positive note, we now recall a result of Alan Turing, which shows that the only computationally reasonable approach to defining global convolutions uses tori, as in the TIE convolution of Definition \ref{def-TIE-conv} above.
First, let us return to the manifold hypothesis and assume that we have data $D$ approximated by a connected smooth manifold $M$. Observe that in order to define a global, computable convolution operator on $M$, we must also assume the following: \begin{enumerate} \item A global group operation can be defined on $M$, in such a way that a convolution may be defined. \item $M$ can be approximated by a finite metric space $S$ (so that it is computable). \end{enumerate} The first property implies that $M$ is a Lie group. As early as 1938, Turing knew that a Lie group that can be approximated by a finite metric space is compact and Abelian \cite[Theorem 2]{Turing38}. Therefore, if both conditions hold, our connected $n$--manifold $M$ is a torus $T^{n}$: \begin{Corollary}[Turing approximations] A connected $n$--manifold that admits a global convolution operation finitely approximable by a finite metric space is an $n$--torus. \end{Corollary} Section \ref{sec-reach} reviews the notion of reach and recalls the use of manifolds to approximate data. Section \ref{sec-amenable} explains the notions of group growth and growth of geodesic counting functions, used in the obstructions to efficient strategies mentioned above. Section \ref{sec-toricconv} discusses convolutions on tori. The proofs are found in Section \ref{sec-proofs}, and Section \ref{sec-conclusions} contains conclusions and suggestions for future work.
[ "<|reference_start|> Deep Learning in Neural Networks: An Overview: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. <|reference_end|>", "<|reference_start|> gradient-based learning applied to document recognition: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day. <|reference_end|>", "<|reference_start|> Manifold Learning Theory and Applications: Trained to extract actionable information from large volumes of high-dimensional data, engineers and scientists often have trouble isolating meaningful low-dimensional structures hidden in their high-dimensional observations. Manifold learning, a groundbreaking technique designed to tackle these issues of dimensionality reduction, finds widespread application in machine learning, neural networks, pattern recognition, image processing, and computer vision. Filling a void in the literature, Manifold Learning Theory and Applications incorporates state-of-the-art techniques in manifold learning with a solid theoretical and practical treatment of the subject. Comprehensive in its coverage, this pioneering work explores this novel modality from algorithm creation to successful implementationoffering examples of applications in medical, biometrics, multimedia, and computer vision. 
Emphasizing implementation, it highlights the various permutations of manifold learning in industry including manifold optimization, large scale manifold learning, semidefinite programming for embedding, manifold models for signal acquisition, compression and processing, and multi scale manifold. Beginning with an introduction to manifold learning theories and applications, the book includes discussions on the relevance to nonlinear dimensionality reduction, clustering, graph-based subspace learning, spectral learning and embedding, extensions, and multi-manifold modeling. It synergizes cross-domain knowledge for interdisciplinary instructions, offers a rich set of specialized topics contributed by expert professionals and researchers from a variety of fields. Finally, the book discusses specific algorithms and methodologies using case studies to apply manifold learning for real-world problems. <|reference_end|>", "<|reference_start|> Embeddings of Riemannian manifolds with finite eigenvector fields of connection Laplacian: <|reference_end|>" ]
{"<|multi_cite_1_1|>": "arxiv-60238", "<|multi_cite_1_2|>": "ss-1056505", "<|cite_2|>": "arxiv-166644", "<|cite_3|>": "ss-1461854", "<|cite_4|>": "ss-711998", "<|cite_5|>": "ss-1450402", "<|multi_cite_6_1|>": "ss-981785", "<|multi_cite_6_2|>": "ss-1375691", "<|cite_7|>": "ss-1644241", "<|multi_cite_8_1|>": "ss-1091941", "<|multi_cite_8_2|>": "ss-2555000", "<|cite_9|>": "ss-2555001", "<|cite_10|>": "ss-1209469", "<|cite_11|>": "ss-1461853", "<|cite_12|>": "ss-912894", "<|cite_13|>": "ss-2555002", "<|cite_15|>": "ss-2555003", "<|cite_16|>": "ss-2555004", "<|cite_17|>": "ss-2555003", "<|cite_18|>": "ss-2555005", "<|cite_19|>": "ss-2555006", "<|cite_20|>": "ss-2555007"}
arXiv: 2010.11635
Title: Continual Learning in Low-rank Orthogonal Subspaces

Abstract: In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished. The prior art in CL uses episodic memory, parameter regularization, or extensible network structures to reduce interference among tasks, but in the end, all these approaches learn different tasks in a joint vector space. We believe this invariably leads to interference among tasks. We propose to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Further, to keep the gradients of different tasks coming from these subspaces orthogonal to each other, we learn isometric mappings by posing network training as an optimization problem over the Stiefel manifold. To the best of our understanding, we report, for the first time, strong results over an experience-replay baseline with and without memory on standard classification benchmarks in continual learning. The code is made publicly available.

Introduction \label{sec:intro}

In continual learning, a learner experiences a sequence of tasks with the objective of remembering all or most of the observed tasks in order to speed up the transfer of knowledge to future tasks. Learning from a diverse sequence of tasks is useful, as it allows for the deployment of machine learning models that can quickly adapt to changes in the environment by leveraging past experiences. Contrary to the standard supervised learning setting, where only a single task is available and the learner can make several passes over the dataset of that task, the sequential arrival of multiple tasks poses unique challenges for continual learning. Chief among these is catastrophic forgetting (\emph{Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem}), whereby the global update of model parameters on the present task interferes with the learned representations of past tasks. This results in the model forgetting previously acquired knowledge. In neural networks, to reduce the deterioration of accumulated knowledge, existing approaches modify network training in broadly three different ways. First, \emph{regularization-based} approaches
(\emph{Overcoming catastrophic forgetting in neural networks}; \emph{Continual Learning Through Synaptic Intelligence}; \emph{Memory Aware Synapses: Learning what (not) to forget}; \emph{Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence};
\emph{Variational Continual Learning}) reduce the drift in network parameters that were important for solving previous tasks. Second, \emph{modular} approaches (\emph{Progressive Neural Networks}; \emph{Lifelong Learning with Dynamically Expandable Networks}) add network components as new tasks arrive. These approaches rely on the knowledge of correct module selection at test time. Third, and perhaps the strongest, \emph{memory-based} approaches (\emph{Gradient Episodic Memory for Continual Learning};
\emph{Memory Efficient Experience Replay for Streaming Learning}; \emph{Selective Experience Replay for Lifelong Learning};
\emph{Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference}) maintain a small replay buffer, called episodic memory, and mitigate catastrophic forgetting by replaying the data in the buffer along with the new task data. One common feature of all three categories is that, in the end, all the tasks are learned in the same vector space, where a vector space is associated with the output of a hidden layer of the network. We believe this restriction invariably leads to forgetting of past tasks. In this work, we propose to learn different tasks in different vector subspaces. We require these subspaces to be orthogonal to each other in order to prevent the learning of a task from interfering catastrophically with previous tasks. More specifically, for a point in the vector space $\mathbb{R}^m$, typically the second-last layer of the network, we project each task to a low-dimensional subspace by a task-specific projection matrix $P \in \mathbb{R}^{m \times m}$ whose rank is $r$, where $r \ll m$. The projection matrices are generated offline such that they are mutually orthogonal across different tasks. This simple projection in the second-last layer reduces forgetting considerably in shallower networks -- the average accuracy increases by up to $13$\% and forgetting drops by up to $66$\% compared to the strongest experience replay baseline (\emph{Continual Learning with Tiny Episodic Memories})
in a three-layer network. However, in deeper networks, the gradients backpropagated from the different projections of the second-last layer do not remain orthogonal to each other in the earlier layers, resulting in interference in those layers. To reduce this interference, we use the fact that a gradient at an earlier layer is a transformed version of the gradient received at the projected layer -- where the transformation is linear and consists of the product of the weight matrix and the diagonal Jacobian matrix of the non-linearity of the layers in between. Reducing interference then requires this transformation to be inner-product preserving, such that if two vectors are orthogonal at the projected layer, they remain close to orthogonal after the transformation. This is equivalent to learning orthonormal weight matrices -- a well-studied problem of learning on Stiefel manifolds (\emph{Optimization algorithms on matrix manifolds};
\emph{Stochastic gradient descent on Riemannian manifolds}). Our approach, dubbed \ours{}, generates two projected orthogonal vectors (gradients) -- one for the current task and another for one of the previous tasks whose data is stored in a tiny replay buffer -- and updates the network weights such that the weights remain on a Stiefel manifold. We describe our approach visually in Fig.~\ref{fig:method}. For the same amount of episodic memory, \ours{} improves upon the strong experience replay baseline by $8$\% in average accuracy and $50$\% in forgetting on deeper networks. \begin{figure} \centering \includegraphics[scale=0.5]{Figs/method.pdf} \caption{\emph{\small \ours{}. Each blob, with the three ellipses, represents a vector space and its subspaces at a certain layer. The projection operator in layer $L$ keeps the subspaces orthogonal (no overlap). The overlap in the intermediate layers is minimized when the weight matrices are learned on the Stiefel manifold.}} \label{fig:method} \end{figure}

Background \label{sec:background}

In this section, we describe the continual learning setup, followed by the necessary preliminaries for our approach.

\subsection{Continual Learning Setup} \label{sec:setup}

We assume a continual learner experiencing a stream of data triplets $(x_i, y_i, t_i)$ containing an input $x_i$, a target $y_i$, and a task identifier $t_i \in \mathcal{T} = \{1, \ldots, T\}$. Each input-target pair $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}_{t_i}$ is an independent and identically distributed example drawn from some unknown distribution $P_{t_i}(X, Y)$, representing the $t_i$-th learning task. We assume that the tasks are experienced in order, $t_i \leq t_j$ for all $i \leq j$, and the learner can store only a few samples from $P_{t_i}$ in a tiny replay buffer $\epsmem_i$. Under this setup, our goal is to estimate a predictor $f = (w \circ \Phi) : \mathcal{X} \times \mathcal{T} \to \mathcal{Y}$, composed of a feature extractor $\Phi_{\Theta} : \mathcal{X} \to \mathcal{H}$, an $L$-layer feed-forward neural network parameterized by $\Theta=\{W_l\}_{l=1}^L$, and a classifier $w_{\theta} : \mathcal{H} \to \mathcal{Y}$, that minimizes the multi-task error \begin{equation} \frac{1}{T} \sum_{t=1}^T \mathbb{E}_{(x, y) \sim P_{t}}\left[\, \ell(f(x, t), y) \,\right], \label{eq:multitask} \end{equation} where $\mathcal{H} \subseteq \mathbb{R}^m$ is an inner product space, $\mathcal{Y} = \cup_{t \in \mathcal{T}} \mathcal{Y}_{t}$, and $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\ge 0}$ is a loss function.
To further comply with the strict sequential setting, similar to prior work (\emph{Gradient Episodic Memory for Continual Learning}; \emph{Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference}), we consider streams of data that are \emph{experienced only once}. We focus only on classification tasks where either the input or the output distribution changes over time. We assume that a task descriptor, identifying the correct classification head, is given at both train and test time.

\subsubsection*{Metrics}

Once the continual learning experience is finished, we measure two statistics to evaluate the quality of the algorithm: \emph{average accuracy} and \emph{average maximum forgetting}. First, the average accuracy is defined as \begin{equation} \text{Accuracy} = \frac{1}{T} \sum_{j=1}^T a_{T, j}, \label{eq:accuracy} \end{equation} where $a_{i,j}$ denotes the test accuracy on task $j$ after the model has finished experiencing task $i$. Second, the average maximum forgetting is defined as \begin{equation} \text{Forgetting} = \frac{1}{T-1} \sum_{j=1}^{T-1} \max_{l \in \{1, \ldots, T-1\}} (a_{l, j} - a_{T, j}), \label{eq:forgetting} \end{equation} that is, the decrease in performance for each task between its peak accuracy and its accuracy after the continual learning experience is finished.
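For concreteness, a minimal sketch of how \eqref{eq:accuracy} and \eqref{eq:forgetting} can be computed from the accuracy matrix $a_{i,j}$; the 0-indexed matrix layout and the function names are our own conventions, not prescribed by the definitions.

\begin{verbatim}
import numpy as np

def average_accuracy(a):
    """Eq. (accuracy): a is a (T, T) matrix with a[i, j] the test accuracy
    on task j after training on tasks 1..i+1 (0-indexed)."""
    return a[-1, :].mean()

def average_max_forgetting(a):
    """Eq. (forgetting): for each task j < T, the gap between its peak
    accuracy over the first T-1 stages and its final accuracy."""
    peak = a[:-1, :-1].max(axis=0)   # max_l a[l, j] over l in 1..T-1
    final = a[-1, :-1]               # a[T, j] for j in 1..T-1
    return (peak - final).mean()

# Example with T = 3 tasks:
a = np.array([[0.90, 0.10, 0.05],
              [0.80, 0.85, 0.10],
              [0.75, 0.70, 0.88]])
print(average_accuracy(a))        # (0.75 + 0.70 + 0.88) / 3
print(average_max_forgetting(a))  # ((0.90 - 0.75) + (0.85 - 0.70)) / 2
\end{verbatim}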
\subsection{Preliminaries} \label{sec:prelim}

Let the inner product in $\mathcal{H}$ be denoted by $\innp{\cdot}{\cdot}$, and let $v$ be an element of $\mathcal{H}$. A matrix $O \in \mathbb{R}^{m \times r}$, where $r \ll m$, parameterizes an $m \times m$ orthogonal projection matrix $P$, given by $P = O(O^{\top}O)^{-1}O^{\top}$, where $\textrm{rank}(P)=r$. The vector $u=Pv$ is the projection of $v$ onto a subspace $\mathcal{U} \subset \mathcal{H}$ with $\textrm{dim}(\mathcal{U})=r$. Furthermore, if the columns of $O$ are orthonormal, the projection matrix simplifies to $P = OO^{\top}$. \begin{definition}[Orthogonal Subspace] Subspaces $\mathcal{U}$ and $\mathcal{W}$ of a vector space $\mathcal{H}$ are orthogonal if $$\innp{u}{w}=0, \quad \forall u \in \mathcal{U}, w \in \mathcal{W}.$$ \label{def:orthog_subspace} \end{definition} \begin{definition}[Isometry] A linear transformation $T: \mathcal{V} \to \mathcal{V}$ is called an isometry if it is distance preserving, \ie $$\|T(v) - T(w)\| = \|v - w\|, \quad \forall v, w \in \mathcal{V}.$$ \label{def:isometry} \end{definition} A linear transformation that preserves distances must preserve angles, and vice versa. We record this in the following theorem. \begin{theorem} $T$ is an isometry iff it preserves inner products. \end{theorem} The proof is fairly standard and is given in Appendix~\ref{sec:isometry_proof}. \begin{corollary} \label{cor:iso_compos} If $T_1$ and $T_2$ are two isometries, then their composition $T_1 \circ T_2$ is also an isometry. \end{corollary} An \emph{orthogonal matrix} preserves inner products and therefore acts as an isometry of Euclidean space. Enforcing orthogonality\footnote{Note that an orthogonal matrix is always square; however, the matrices we consider can be nonsquare. In this work, orthogonality is used in the sense of $W^{\top}W = \mathbf{I}$.} during network training corresponds to solving the following constrained optimization problem: \begin{align} & \min_{\theta, \Theta=\{W_l, b_l\}_{l=1}^L} \ell(f(x, t), y), \nonumber \\ & \textrm{s.t.} \quad W_l^{\top} W_l = \mathbf{I}, \quad \forall l \in \{1,\cdots,L\}, \end{align} where $\mathbf{I}$ is an identity matrix of appropriate dimensions. The solution set of the above problem is a valid Riemannian manifold when an inner product is defined. It is called the Stiefel manifold, defined as $\bar{\mathcal{M}}_l = \{ W_l \in \mathbb{R}^{n_l \times n_{l-1}} \mid W_l^{\top}W_l = \mathbf{I} \}$, where $n_l$ is the number of neurons in layer $l$, and it is assumed that $n_l \geq n_{l-1}$; for most neural network architectures, this assumption holds. For a convolutional layer $W_l \in \mathbb{R}^{c_{out} \times c_{in} \times h \times w}$, we reshape the weights to $W_l \in \mathbb{R}^{c_{out} \times (c_{in}\cdot h \cdot w)}$.
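Before turning to optimization over the Stiefel manifold, here is a minimal sketch of how the task-specific low-rank projectors above could be generated offline so that the task subspaces satisfy Definition~\ref{def:orthog_subspace}; carving one random orthonormal basis into per-task blocks of columns is an illustrative choice, not the only option.

\begin{verbatim}
import numpy as np

def make_orthogonal_projectors(m, r, num_tasks, seed=0):
    """Return num_tasks rank-r projectors P_t = O_t O_t^T whose ranges are
    mutually orthogonal subspaces of R^m (requires num_tasks * r <= m)."""
    assert num_tasks * r <= m, "not enough dimensions for disjoint subspaces"
    rng = np.random.default_rng(seed)
    # QR of a Gaussian matrix gives num_tasks * r orthonormal columns.
    Q, _ = np.linalg.qr(rng.standard_normal((m, num_tasks * r)))
    return [Q[:, t * r:(t + 1) * r] @ Q[:, t * r:(t + 1) * r].T
            for t in range(num_tasks)]

# Sanity check: projections onto different task subspaces have zero inner
# product, as in the Orthogonal Subspace definition.
P = make_orthogonal_projectors(m=128, r=16, num_tasks=4)
v, w = np.random.randn(128), np.random.randn(128)
print(np.allclose((P[0] @ v) @ (P[1] @ w), 0.0, atol=1e-10))  # True
\end{verbatim}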
The optimization of a differentiable cost function over a Stiefel manifold has been studied extensively in the literature (\emph{Optimization algorithms on matrix manifolds}; \emph{Stochastic gradient descent on Riemannian manifolds}). Here, we briefly summarize the two main steps of the optimization process and refer the reader to \emph{Optimization algorithms on matrix manifolds} for further details.
For a given point $W_l$ on the Stiefel manifold, let $\mathcal{T}_{W_l}$ denote the tangent space at that point. Further, let $g_l$ be the gradient (a matrix) of the loss function with respect to $W_l$. The first step of the optimization projects $g_l$ onto $\mathcal{T}_{W_l}$ using the closed form $Proj_{\mathcal{T}_{W_l}}(g_l) = AW_l$, where $A$ is the skew-symmetric matrix given by (see Appendix~\ref{sec:tangent_action} for the derivation): \begin{equation} \label{eq:tangent_cipp} A=g_{l}W_l^{\top} - W_l g_l^{\top}. \end{equation} Once the gradient projection onto the tangent space is found, the second step is to generate a descent curve of the loss function in the manifold. The Cayley transform defines one such curve using a parameter $\tau \geq 0$, specifying the length of the curve, and a skew-symmetric matrix $U$ (\emph{Learning algorithms utilizing quasi-geodesic flows on the Stiefel manifold}): \begin{equation} \label{eq:cayley} Y(\tau) = \Big(I + \frac{\tau}{2}U \Big)^{-1} \Big(I - \frac{\tau}{2}U \Big) W_l. \end{equation} It can be seen that the curve stays on the Stiefel manifold, \ie $Y(\tau)^{\top}Y(\tau) = \mathbf{I}$ and $Y(0) = W_l$, and that its tangent vector at $\tau = 0$ is $Y^{\prime}(0) = -UW_l$. By setting $U=A=g_{l}W_l^{\top} - W_l g_l^{\top}$, the curve becomes a descent curve for the loss function. The authors of \emph{Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform} showed that one can bypass the expensive matrix inversion in \eqref{eq:cayley} by following a fixed-point iteration of the Cayley transform, \begin{equation} \label{eq:iterative_cayley} Y(\tau) = W_l - \frac{\tau}{2} A (W_l + Y(\tau)). \end{equation}
We specify two new optimization algorithms: Cayley SGD with momentum, and Cayley ADAM on the Stiefel manifold. Convergence of Cayley SGD is theoretically analyzed. Our experiments for CNN training demonstrate that both algorithms: (a) Use less running time per iteration relative to existing approaches that enforce orthonormality of CNN parameters; and (b) Achieve faster convergence rates than the baseline SGD and ADAM algorithms without compromising the performance of the CNN. Cayley SGD and Cayley ADAM are also shown to reduce the training time for optimizing the unitary transition matrices in RNNs.) <|cite_end|> further showed that, under some mild continuity assumptions, \eqref{eq:iterative_cayley} converges to the closed form~\eqref{eq:cayley} faster than other approximation algorithms. The overall optimization on the Stiefel manifold is illustrated in Figure~\ref{fig:stiefel_optim}. \begin{figure} \centering \includegraphics[scale=0.4]{Figs/stiefel.pdf} \caption[\ours: Update in Stiefel manifold]{The gradient computed at a given point ($W_t$) on the manifold is first projected onto the tangent plane; this step admits a closed form. The projected gradient is then retracted to a point on the manifold, giving the final update $W_{t+1}$.} \label{fig:stiefel_optim} \end{figure} <|paper_end|>
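To make the two-step Stiefel update above concrete, the following is a minimal NumPy sketch of one Cayley-SGD step, combining the tangent projection of \eqref{eq:tangent_cipp} with the fixed-point retraction of \eqref{eq:iterative_cayley}. The function name, the initial guess, and the choice of two fixed-point iterations are illustrative assumptions rather than details taken from the paper.

\begin{verbatim}
import numpy as np

def cayley_sgd_step(W, g, tau=0.1, num_iters=2):
    """One illustrative Cayley-SGD update on the Stiefel manifold.

    W   : (n, p) matrix with orthonormal columns (W.T @ W = I)
    g   : (n, p) Euclidean gradient of the loss at W
    tau : step length along the descent curve
    """
    # Skew-symmetric matrix from the tangent-space projection (eq:tangent_cipp).
    A = g @ W.T - W @ g.T
    # Fixed-point iteration of the Cayley transform (eq:iterative_cayley),
    # avoiding the explicit matrix inversion of the closed form (eq:cayley).
    Y = W - tau * (A @ W)  # initial guess: one explicit tangent step
    for _ in range(num_iters):
        Y = W - (tau / 2.0) * A @ (W + Y)
    return Y

# Usage: a random point on the manifold and a random Euclidean gradient.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal columns
g = rng.standard_normal((8, 3))
W_next = cayley_sgd_step(W, g)
# Deviation from orthonormality stays small, since Y(tau) remains
# (approximately) on the manifold:
print(np.abs(W_next.T @ W_next - np.eye(3)).max())
\end{verbatim}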
[ "<|reference_start|> Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem: <|reference_end|>", "<|reference_start|> Continual Learning Through Synaptic Intelligence: While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency. <|reference_end|>", "<|reference_start|> Selective Experience Replay for Lifelong Learning: Deep reinforcement learning has emerged as a powerful tool for a variety of learning tasks, however deep nets typically exhibit forgetting when learning multiple tasks in sequence. To mitigate forgetting, we propose an experience replay process that augments the standard FIFO buffer and selectively stores experiences in a long-term memory. We explore four strategies for selecting which experiences will be stored: favoring surprise, favoring reward, matching the global training distribution, and maximizing coverage of the state space. We show that distribution matching successfully prevents catastrophic forgetting, and is consistently the best approach on all domains tested. While distribution matching has better and more consistent performance, we identify one case in which coverage maximization is beneficial - when tasks that receive less trained are more important. Overall, our results show that selective experience replay, when suitable selection algorithms are employed, can prevent catastrophic forgetting. <|reference_end|>", "<|reference_start|> Stochastic gradient descent on Riemannian manifolds: Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically. <|reference_end|>" ]
[ 0, 2, 10, 14 ]
{"<|cite_1|>": "ss-1198060", "<|multi_cite_2_1|>": "arxiv-111666", "<|multi_cite_2_2|>": "arxiv-118897", "<|multi_cite_2_3|>": "arxiv-141313", "<|multi_cite_2_4|>": "arxiv-146744", "<|multi_cite_2_5|>": "arxiv-138501", "<|multi_cite_3_1|>": "arxiv-100147", "<|multi_cite_3_2|>": "arxiv-131196", "<|multi_cite_4_1|>": "ss-2279248", "<|multi_cite_4_2|>": "arxiv-172872", "<|multi_cite_4_3|>": "arxiv-149940", "<|multi_cite_4_4|>": "arxiv-177910", "<|cite_5|>": "ss-707305", "<|multi_cite_6_1|>": "ss-1261283", "<|multi_cite_6_2|>": "arxiv-26452", "<|multi_cite_7_1|>": "ss-2279248", "<|multi_cite_7_2|>": "arxiv-177910", "<|multi_cite_8_1|>": "ss-1261283", "<|multi_cite_8_2|>": "arxiv-26452", "<|cite_10|>": "ss-1261283", "<|cite_9|>": "ss-1465136", "<|cite_11|>": "arxiv-246309", "<|cite_12|>": "arxiv-246309"}
2401.15896
<|paper_start|> Title: M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining Abstract: M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining: Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, VLMs that support multiple languages, e.g., both Chinese and English, have lagged due to the relative scarcity of large-scale pretraining datasets. Toward this end, we introduce a comprehensive bilingual (Chinese-English) dataset BM-6B with over 6 billion image-text pairs, aimed at enhancing multimodal foundation models so that they understand images well in both languages. To handle a dataset of this scale, we propose a novel grouped aggregation approach for image-text contrastive loss computation, which reduces the communication overhead and GPU memory demands significantly, facilitating a 60% increase in training speed. We pretrain a series of bilingual image-text foundation models with an enhanced fine-grained understanding ability on BM-6B; the resulting models, dubbed $M^2$-Encoders (pronounced "M-Square"), set new benchmarks in both languages for multimodal retrieval and classification tasks. Notably, our largest $M^2$-Encoder-10B model has achieved top-1 accuracies of 88.5% on ImageNet and 80.7% on ImageNet-CN under a zero-shot classification setting, surpassing previously reported SoTA methods by 2.2% and 21.1%, respectively. The $M^2$-Encoder series represents one of the most comprehensive bilingual image-text foundation models to date, so we are making it available to the research community for further exploration and development. Introduction Vision-language foundation models, such as CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>, are typically developed through contrastive learning by aligning image-text pairs on large-scale unsupervised or weakly supervised datasets, establishing them as fundamental components of artificial intelligence.
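As background for the contrastive pretraining described above, below is a minimal PyTorch-style sketch of the standard symmetric image-text contrastive (InfoNCE) loss used by CLIP-like models. It is an illustrative reconstruction of the usual formulation, not code released with the paper; the fixed temperature value is an assumption.

\begin{verbatim}
import torch
import torch.nn.functional as F

def itc_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric image-text contrastive loss for a batch of matched pairs.

    image_emb, text_emb: (B, D) embeddings of B aligned image-text pairs.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    # Matched pairs sit on the diagonal: contrast each image against all
    # texts in the batch, and each text against all images.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
\end{verbatim}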
Benefiting from their robust visual and textual representation abilities and exceptional zero-shot transferability, they are widely used in modern large-scale multimodal models, where they serve key roles in visual understanding <|cite_start|> (Reference: mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality: Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation. In this study, we introduce mPLUG-Owl, a novel training paradigm that equips LLMs with multi-modal abilities through modularized learning of foundation LLM, a visual knowledge module, and a visual abstractor module. This approach can support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. The training paradigm of mPLUG-Owl involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM while maintaining and even improving the generation abilities of LLM. In the first stage, the visual knowledge module and abstractor module are trained with a frozen LLM module to align the image and text. In the second stage, language-only and multi-modal supervised datasets are used to jointly fine-tune a low-rank adaption (LoRA) module on LLM and the abstractor module by freezing the visual knowledge module. We carefully build a visually-related instruction evaluation set OwlEval. Experimental results show that our model outperforms existing multi-modal models, demonstrating mPLUG-Owl's impressive instruction and visual understanding ability, multi-turn conversation ability, and knowledge reasoning ability. Besides, we observe some unexpected and exciting abilities such as multi-image correlation and scene text understanding, which makes it possible to leverage it for harder real scenarios, such as vision-only document comprehension. Our code, pre-trained model, instruction-tuned models, and evaluation set are available at https://github.com/X-PLUG/mPLUG-Owl. The online demo is available at https://www.modelscope.cn/studios/damo/mPLUG-Owl.) <|cite_end|> <|cite_start|> (Reference: BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models: The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.) 
<|cite_end|> <|cite_start|> (Reference: MultiModal-GPT: A Vision and Language Model for Dialogue with Humans: We present a vision and language model named MultiModal-GPT to conduct multi-round dialogue with humans. MultiModal-GPT can follow various instructions from humans, such as generating a detailed caption, counting the number of interested objects, and answering general questions from users. MultiModal-GPT is parameter-efficiently fine-tuned from OpenFlamingo, with Low-rank Adapter (LoRA) added both in the cross-attention part and the self-attention part of the language model. We first construct instruction templates with vision and language data for multi-modality instruction tuning to make the model understand and follow human instructions. We find the quality of training data is vital for the dialogue performance, where few data containing short answers can lead the model to respond shortly to any instructions. To further enhance the ability to chat with humans of the MultiModal-GPT, we utilize language-only instruction-following data to train the MultiModal-GPT jointly. The joint training of language-only and visual-language instructions with the \emph{same} instruction template effectively improves dialogue performance. Various demos show the ability of continuous dialogue of MultiModal-GPT with humans. Code, dataset, and demo are at https://github.com/open-mmlab/Multimodal-GPT) <|cite_end|> <|cite_start|> (Reference: MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models: The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.) <|cite_end|> <|cite_start|> (Reference: InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning: Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. 
Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-sourced at https://github.com/salesforce/LAVIS/tree/main/projects/instructblip.) <|cite_end|> <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|> <|cite_start|> (Reference: Qwen-VL: A frontier large vision-language model with versatile abilities: We introduce the Qwen-VL series, a set of large-scale vision-language models designed to perceive and understand both text and images. Comprising Qwen-VL and Qwen-VL-Chat, these models exhibit remarkable performance in tasks like image captioning, question answering, visual localization, and flexible interaction. The evaluation covers a wide range of tasks including zero-shot captioning, visual or document visual question answering, and grounding. We demonstrate the Qwen-VL outperforms existing Large Vision Language Models (LVLMs). We present their architecture, training, capabilities, and performance, highlighting their contributions to advancing multimodal artificial intelligence. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL .) <|cite_end|>, and cross-modal alignment and generation <|cite_start|> (Reference: Hierarchical Text-Conditional Image Generation with CLIP Latents: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. 
To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.) <|cite_end|> <|cite_start|> (Reference: Zero-Shot Text-to-Image Generation: Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.) <|cite_end|> <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|>. \begin{figure}[tp] \centering \includegraphics[width=1\linewidth]{effect.png} \caption{An overview of existing multimodal models on zero-shot classification and retrieval performance. The top-1 accuracy on (a) ImageNet-CN and (b) ImageNet. The retrieval mean recall (MR) on (c) Flickr30K-CN and (d) Flickr30K.
Our $M^2$-Encoders excel compared to models with a similar number of parameters.} \label{fig:effect} \end{figure} The performance of image-text foundational models relies heavily on large-scale image-text datasets. However, there is no large-scale Chinese image-text dataset comparable to LAION2B-EN <|cite_start|> (Reference: LAION-5B: An open large-scale dataset for training next generation image-text models: Groundbreaking language-vision architectures like CLIP and DALL-E proved the utility of training on large amounts of noisy image-text data, without relying on expensive accurate labels used in standard vision unimodal supervised learning. The resulting models showed capabilities of strong text-guided image generation and transfer to downstream tasks, while performing remarkably at zero-shot classification with noteworthy out-of-distribution robustness. Since then, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo and Imagen made further improvements. Studying the training and capabilities of such models requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. To address this problem and democratize research on large-scale multi-modal models, we present LAION-5B - a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English language. We show successful replication and fine-tuning of foundational models like CLIP, GLIDE and Stable Diffusion using the dataset, and discuss further experiments enabled with an openly available dataset of this scale. Additionally we provide several nearest neighbor indices, an improved web-interface for dataset exploration and subset generation, and detection scores for watermark, NSFW, and toxic content detection. Announcement page https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/) <|cite_end|>, which might have hindered the performance of Chinese multimodal foundational models and their real-world applications. Our work aims to narrow this gap in data scales. Toward this end, we curate image-text pairs collected from public datasets and legally sourced web content; techniques such as translation into Chinese, data cleaning to remove noise, and data augmentation to enhance variability are applied as part of our methodology, resulting in a large-scale dataset comprising over 3 billion Chinese image-text pairs, a volume that is even larger than datasets such as LAION2B-EN. To the best of our knowledge, this collection constitutes the largest Chinese image-text dataset available to date. By integrating this corpus with publicly available English datasets (e.g., LAION2B-EN, COYO-700M, Datacomp-1B <|cite_start|> (Reference: DataComp: In search of the next generation of multimodal datasets: Multimodal datasets are a critical component in recent breakthroughs such as Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the ML ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets.
Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. In particular, our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release DataComp and all accompanying code at www.datacomp.ai.) <|cite_end|>) and accounting for potential overlaps, we have constructed a high-quality bilingual dataset dubbed BM-6B (BM stands for bilingual multi-modality) that includes nearly 6 billion unique image-text pairs. The construction of this dataset provides a critical foundation for developing advanced bilingual multimodal models catering to both Chinese and English languages. Training on such a massive dataset necessitates a substantial increase in computational resources. The conventional image-text contrastive (ITC) loss calculation requires gathering image-text representations from all computing nodes in a distributed system. This leads to significant communication overhead and a risk of GPU memory depletion (out-of-memory errors) in large-scale training scenarios. To overcome this challenge, we design a new grouped aggregation strategy dubbed Grouped-ITC with batch accumulation (abbreviated GBA-ITC) that evenly divides the nodes in the cluster into multiple groups (see the illustrative sketch following this paper). During the computation of the ITC loss, aggregation is performed within each group and coupled with batch accumulation, which decouples the ITC loss computation from the overall batch size, leading to reduced memory requirements and enhanced scalability. This technique yields a 60\% acceleration in training speed. We also adopt the "SHARING-DELINKING" training strategy proposed by the M6-10T project <|cite_start|> (Reference: M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining: Recent expeditious developments in deep learning algorithms, distributed training, and even hardware design for large models have enabled training extreme-scale models, say GPT-3 and Switch Transformer possessing hundreds of billions or even trillions of parameters. However, under limited resources, extreme-scale model training that requires enormous amounts of computes and memory footprint suffers from frustratingly low efficiency in model convergence. In this paper, we propose a simple training strategy called "Pseudo-to-Real" for high-memory-footprint-required large models. Pseudo-to-Real is compatible with large models with architecture of sequential layers. We demonstrate a practice of pretraining unprecedented 10-trillion-parameter model, an order of magnitude larger than the state-of-the-art, on solely 512 GPUs within 10 days. Besides demonstrating the application of Pseudo-to-Real, we also provide a technique, Granular CPU offloading, to manage CPU memory for training large model and maintain high GPU utilities. Fast training of extreme-scale models on a decent amount of resources can bring much smaller carbon footprint and contribute to greener AI.)
<|cite_end|>, and utilize the ReCLIP <|cite_start|> (Reference: RECLIP: Resource-efficient CLIP by Training with Small Images: We present RECLIP (Resource-efficient CLIP), a simple method that minimizes computational resource footprint for CLIP (Contrastive Language Image Pretraining). Inspired by the notion of coarse-to-fine in computer vision, we leverage small images to learn from large-scale language supervision efficiently, and finetune the model with high-resolution data in the end. Since the complexity of the vision transformer heavily depends on input image size, our approach significantly reduces the training resource requirements both in theory and in practice. Using the same batch size and training epoch, RECLIP achieves highly competitive zero-shot classification and image-text retrieval accuracy with 6 to 8x less computational resources and 7 to 9x fewer FLOPs than the baseline. Compared to the state-of-the-art contrastive learning methods, RECLIP demonstrates 5 to 59x training resource savings while maintaining highly competitive zero-shot classification and retrieval performance. Finally, RECLIP matches the state of the art in transfer learning to open-vocabulary detection tasks, achieving 32 APr on LVIS. We hope this work will pave the path for the broader research community to explore language supervised pretraining in resource-friendly settings.) <|cite_end|> strategy to accelerate training convergence. With the aforementioned efficient training methods, we train a series of $M^2$-Encoder models on BM-6B, with a focus on enhanced fine-grained understanding capabilities. Our $M^2$-Encoders span from 0.4 billion to 10 billion parameters. We conduct zero-shot evaluations of our models' performance on six bilingual cross-modal retrieval and classification test sets, including three English test datasets: ImageNet <|cite_start|> (Reference: ImageNet: A large-scale Hierarchical Image Database: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.)
<|cite_end|>, Flickr30K <|cite_start|> (Reference: Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models: The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.) <|cite_end|>, COCO <|cite_start|> (Reference: Microsoft COCO Captions: Data Collection and Evaluation Server: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.) <|cite_end|>, and their three Chinese counterparts, respectively: ImageNet-CN <|cite_start|> (Reference: Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese: The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). We have released our codes, models, and demos in https://github.com/OFA-Sys/Chinese-CLIP) <|cite_end|>, Flickr30K-CN <|cite_start|> (Reference: Fluency-Guided Cross-Lingual Image Captioning: Image captioning has so far been explored mostly in English, as most available datasets are in this language. However, the application of image captioning should not be restricted by language.
Only few studies have been conducted for image captioning in a cross-lingual setting. Different from these works that manually build a dataset for a target language, we aim to learn a cross-lingual captioning model fully from machine-translated sentences. To conquer the lack of fluency in the translated sentences, we propose in this paper a fluency-guided learning framework. The framework comprises a module to automatically estimate the fluency of the sentences and another module to utilize the estimated fluency scores to effectively train an image captioning model for the target language. As experiments on two bilingual (English-Chinese) datasets show, our approach improves both fluency and relevance of the generated captions in Chinese, but without using any manually written sentences from the target language.) <|cite_end|>, and COCO-CN <|cite_start|> (Reference: COCO-CN for Cross-Lingual Image Tagging, Captioning, and Retrieval: This paper contributes to cross-lingual image annotation and retrieval in terms of data and baseline methods. We propose COCO-CN, a novel dataset enriching MS-COCO with manually written Chinese sentences and tags. For effective annotation acquisition, we develop a recommendation-assisted collective annotation system, automatically providing an annotator with several tags and sentences deemed to be relevant with respect to the pictorial content. Having 20 342 images annotated with 27 218 Chinese sentences and 70 993 tags, COCO-CN is currently the largest Chinese–English dataset that provides a unified and challenging platform for cross-lingual image tagging, captioning, and retrieval. We develop conceptually simple yet effective methods per task for learning from cross-lingual resources. Extensive experiments on the three tasks justify the viability of the proposed dataset and methods. Data and code are publicly available at https://github.com/li-xirong/coco-cn.) <|cite_end|>. As shown in Figure \ref{fig:effect}, all of our models achieve state-of-the-art results among models with comparable numbers of parameters, across multimodal retrieval and classification tasks in both Chinese and English. For fine-grained evaluation, we collect tasks requiring fine-grained perception, including fine-grained category recognition, counting, multiple-object combination recognition, and relationships between objects, and establish a bilingual fine-grained benchmark. Our $M^2$-Encoder-10B surpasses existing CLIP-based models on our fine-grained benchmark by a large margin, with an absolute improvement of 21.58\% over CN-CLIP <|cite_start|> (Reference: Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese: The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance.
Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). We have released our codes, models, and demos in https://github.com/OFA-Sys/Chinese-CLIP) <|cite_end|> for Chinese, and 15.2\% over CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> for English. Our main contributions are as follows: \begin{itemize} \item We propose BM-6B, an ultra-large dataset consisting of 6 billion image-text pairs with Chinese and English data nearly equally distributed, to mitigate the shortage of extensive Chinese image-text datasets. We verify that the BM-6B dataset is large enough to facilitate the training of bilingual image-text multimodal foundational models from scratch. \item We introduce a novel grouped aggregation strategy named GBA-ITC that leads to reduced memory requirements and enhanced scalability. This technique yields a 60\% acceleration in training speed, facilitating large-scale efficient pretraining. \item We pretrain the $M^2$-Encoder series models on the BM-6B dataset, placing additional emphasis on their fine-grained perception abilities. The resulting $M^2$-Encoder-10B model achieves SOTA performance not only across six bilingual cross-modal retrieval and classification datasets but also on our constructed fine-grained perception benchmark within a zero-shot learning setup. \end{itemize} Related Work Recent advancements in adapting VLMs for Chinese language understanding include CN-CLIP <|cite_start|> (Reference: Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese: The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining.
In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). We have released our codes, models, and demos in https://github.com/OFA-Sys/Chinese-CLIP) <|cite_end|> and AltCLIP <|cite_start|> (Reference: AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities: In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we altered its text encoder with a pre-trained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, COCO-CN and XTD. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.) <|cite_end|>. CN-CLIP enhances CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. 
We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> with Chinese language support by utilizing locked-image tuning <|cite_start|> (Reference: LiT: Zero-Shot Transfer with Locked-image text Tuning: This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 85.2% zero-shot transfer accuracy on the ImageNet test set, and 82.5% on the challenging out-of-distribution ObjectNet test set.) <|cite_end|> to keep the CLIP visual encoder frozen while aligning it with a Chinese text encoder in the first stage, followed by contrastive fine-tuning using a dataset of 200 million Chinese image-text pairs in the second stage. Meanwhile, AltCLIP extends CLIP with Chinese support by aligning the CLIP text encoder with a multilingual text encoder using a teacher-learning approach. Our approach differs from the methods mentioned above in three key aspects. Firstly, unlike CN-CLIP and AltCLIP, which build upon the existing CLIP model, our bilingual $M^2$-Encoders are developed without relying on any pre-existing pretrained models and are trained from scratch on the massive bilingual BM-6B dataset. Secondly, these CLIP-based models tend to underperform on tasks that require detailed perception, since they rely on the ITC task alone for cross-modal alignment; our $M^2$-Encoders are trained with enhanced fine-grained understanding capability. Thirdly, CN-CLIP and AltCLIP are limited to a maximum model size of 1B parameters, potentially constraining their ability to capture intricate patterns. Our scalable model architecture and the BM-6B dataset have enabled us to train a model with up to 10 billion parameters, which sets new state-of-the-art benchmarks on both Chinese and English multimodal tasks and is, to our knowledge, the largest bilingual contrastive vision-language model to date. <|paper_end|>
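The GBA-ITC strategy described in this paper's introduction can be illustrated with a small distributed-training sketch. The following PyTorch pseudocode is a speculative reconstruction under stated assumptions: a process group per node group, a model returning L2-normalized embeddings, and gradient accumulation over micro-batches. Names such as `gba_itc_step` are hypothetical; this is not the authors' released implementation.

\begin{verbatim}
import torch
import torch.distributed as dist
import torch.nn.functional as F

def gba_itc_step(model, micro_batches, group, temperature=0.07):
    """Illustrative Grouped-ITC with batch accumulation (GBA-ITC).

    Rather than all-gathering embeddings across every node (whose cost
    and memory grow with the global batch size), negatives are gathered
    only within a small process group, and gradients are accumulated
    over micro-batches, decoupling the ITC loss computation from the
    overall batch size.
    """
    model.zero_grad()
    world = dist.get_world_size(group=group)
    rank = dist.get_rank(group=group)
    for images, texts in micro_batches:
        img_emb, txt_emb = model(images, texts)  # (b, D) each, L2-normalized
        # Gather negatives from this group only; all_gather outputs are
        # detached, so we re-insert the local tensors to keep gradients.
        img_all = [torch.zeros_like(img_emb) for _ in range(world)]
        txt_all = [torch.zeros_like(txt_emb) for _ in range(world)]
        dist.all_gather(img_all, img_emb, group=group)
        dist.all_gather(txt_all, txt_emb, group=group)
        img_all[rank], txt_all[rank] = img_emb, txt_emb
        logits = torch.cat(img_all) @ torch.cat(txt_all).t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))
        (loss / len(micro_batches)).backward()  # batch accumulation
    # optimizer.step() would follow once gradients are synchronized
    # across ranks (e.g., by DDP or an explicit all_reduce).
\end{verbatim}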
[ "<|reference_start|> Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP. <|reference_end|>", "<|reference_start|> Hierarchical Text-Conditional Image Generation with CLIP Latents: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. <|reference_end|>", "<|reference_start|> Microsoft COCO Captions: Data Collection and Evaluation Server: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. 
<|reference_end|>", "<|reference_start|> Fluency-Guided Cross-Lingual Image Captioning: Image captioning has so far been explored mostly in English, as most available datasets are in this language. However, the application of image captioning should not be restricted by language. Only few studies have been conducted for image captioning in a cross-lingual setting. Different from these works that manually build a dataset for a target language, we aim to learn a cross-lingual captioning model fully from machine-translated sentences. To conquer the lack of fluency in the translated sentences, we propose in this paper a fluency-guided learning framework. The framework comprises a module to automatically estimate the fluency of the sentences and another module to utilize the estimated fluency scores to effectively train an image captioning model for the target language. As experiments on two bilingual (English-Chinese) datasets show, our approach improves both fluency and relevance of the generated captions in Chinese, but without using any manually written sentences from the target language. <|reference_end|>" ]
[ 0, 8, 17, 19 ]
{"<|cite_1|>": "arxiv-323919", "<|multi_cite_2_1|>": "arxiv-500417", "<|multi_cite_2_2|>": "arxiv-477561", "<|multi_cite_2_3|>": "arxiv-503098", "<|multi_cite_2_4|>": "arxiv-498672", "<|multi_cite_2_5|>": "arxiv-503928", "<|multi_cite_2_6|>": "arxiv-497716", "<|multi_cite_2_7|>": "ss-1189281", "<|multi_cite_3_1|>": "ss-745412", "<|multi_cite_3_2|>": "arxiv-323257", "<|multi_cite_3_3|>": "arxiv-388766", "<|cite_4|>": "arxiv-454329", "<|cite_6|>": "arxiv-500387", "<|cite_7|>": "arxiv-372464", "<|cite_8|>": "arxiv-496568", "<|cite_9|>": "ss-710402", "<|cite_10|>": "arxiv-77941", "<|cite_11|>": "arxiv-75485", "<|cite_12|>": "arxiv-459275", "<|cite_13|>": "arxiv-131921", "<|cite_14|>": "ss-1289835", "<|cite_15|>": "arxiv-459275", "<|cite_16|>": "arxiv-323919", "<|cite_17|>": "arxiv-459275", "<|cite_18|>": "arxiv-461596", "<|cite_19|>": "arxiv-323919", "<|cite_20|>": "arxiv-381157"}
1502.02135
<|paper_start|> Title: Simultaneous Time-Space Upper Bounds for Certain Problems in Planar Graphs Abstract: Simultaneous Time-Space Upper Bounds for Certain Problems in Planar Graphs: In this paper, we show that given a weighted, directed planar graph $G$ and any $\epsilon >0$, there exists a polynomial time and $O(n^{\frac{1}{2}+\epsilon})$ space algorithm that computes the shortest path between two fixed vertices in $G$. We also consider the {\RB} problem, which asks, given a graph $G$ whose edges are colored either red or blue and two fixed vertices $s$ and $t$ in $G$, whether there is a path from $s$ to $t$ in $G$ that alternates between red and blue edges. The {\RB} problem in planar DAGs is {\NL}-complete. We exhibit a polynomial time and $O(n^{\frac{1}{2}+\epsilon})$ space algorithm (for any $\epsilon >0$) for the {\RB} problem in planar DAGs. In the last part of this paper, we consider the problem of deciding and constructing a perfect matching in a planar bipartite graph, as well as the related problem of finding a Hall-obstacle in a planar bipartite graph. We show that the time-space bounds for these two problems are the same as the bound for the shortest path problem in a directed planar graph. Introduction \label{sec:intro} Computing the shortest path between two vertices in a weighted, directed graph is a fundamental problem in computer science. Several popular and efficient algorithms are known for this problem, such as Dijkstra's algorithm <|cite_start|> (Reference: {A note on two problems in connexion with graphs: We consider n points (nodes), some or all pairs of which are connected by a branch; the length of each branch is given. We restrict ourselves to the case where at least one path exists between any two nodes. We now consider two problems. Problem 1. Constrnct the tree of minimum total length between the n nodes. (A tree is a graph with one and only one path between every two nodes.) In the course of the construction that we present here, the branches are subdivided into three sets: I. the branches definitely assignec~ to the tree under construction (they will form a subtree) ; II. the branches from which the next branch to be added to set I, will be selected ; III. the remaining branches (rejected or not yet considered). The nodes are subdivided into two sets: A. the nodes connected by the branches of set I, B. the remaining nodes (one and only one branch of set II will lead to each of these nodes), We start the construction by choosing an arbitrary node as the only member of set A, and by placing all branches that end in this node in set II. To start with, set I is empty. From then onwards we perform the following two steps repeatedly. Step 1. The shortest branch of set II is removed from this set and added to) <|cite_end|> and the Bellman-Ford algorithm <|cite_start|> (Reference: On a Routing Problem: In a system with one queue and several service stations, it is a natural principle to route a customer to the idle station with the distributionwise shortest service time. For the case with exponentially distributed service times, we use a coupling to give strong support to that principle. We also treat another topic. A modified version of our methods brings support to the design principle: It is better with few but quick servers.) <|cite_end|> <|cite_start|> (Reference: P. Network Flow Theory: ) <|cite_end|>. Both of these algorithms require a linear amount of space and run in polynomial time.
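As a point of reference for the space bounds discussed here, the following is a standard textbook sketch of the Bellman-Ford algorithm in Python (not code from the paper). It runs in polynomial time, uses space linear in the number of vertices beyond the input, and, unlike Dijkstra's algorithm, also tolerates negative edge weights as long as there is no negative-weight cycle.

\begin{verbatim}
def bellman_ford(edges, n, s):
    """Single-source shortest paths; handles negative edge weights
    (assuming no negative-weight cycle). O(n) extra space, O(n*m) time.

    edges: list of (u, v, w) directed edges over vertices 0..n-1.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):            # n-1 relaxation rounds suffice
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            break                     # early exit once distances stabilize
    return dist

# Usage: 4 vertices, one negative edge, no negative cycle.
edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
print(bellman_ford(edges, 4, 0))  # [0, -1, 1, 2]
\end{verbatim}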
However, the Bellman-Ford algorithm is more versatile, since it can also handle graphs with negative edge weights (but no negative weight cycles). There is also a more recent algorithm by Klein, Mozes and Weimann <|cite_start|> (Reference: Shortest paths in directed planar graphs with negative lengths: a linear-space $o(n \log^2 n)$-time algorithm: We give an $O(n \log^2 n)$-time, linear-space algorithm that, given a directed planar graph with positive and negative arc-lengths, and given a node s, finds the distances from s to all nodes. The best previously known algorithm requires $O(n \log^3 n)$ time and $O(n \log n)$ space.) <|cite_end|> which runs in polynomial time (with better parameters) but still requires linear space; however, this algorithm considers the shortest path problem only for directed planar graphs. Another fundamental problem in space complexity theory that is closely related to the shortest path problem is the problem of deciding reachability between two vertices in a directed graph. This problem characterizes the complexity class non-deterministic logspace, or {\NL}. Savitch <|cite_start|> (Reference: Relationships Between Nondeterministic and Deterministic Tape Complexities: ) <|cite_end|> showed that {\NL} is contained in $\L^2$ ({\L} is the class {\em deterministic log-space}); however, Savitch's algorithm takes $\Theta(n^{\log n})$ time. Its recursion is sketched below.
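As a reminder of why Savitch's algorithm is space-efficient but slow, here is a minimal Python rendering of its recursion (our illustration; the paper contains no code). Reachability within $2^k$ steps is decided by recursing on midpoints, so only $O(\log n)$ stack frames are live at any moment, but the running time is superpolynomial.

\begin{verbatim}
def savitch_reach(adj, u, v, k):
    # Is there a walk from u to v of length at most 2**k?
    # adj: dict mapping each vertex to a set of out-neighbours.
    if k == 0:
        return u == v or v in adj[u]
    # Guess a midpoint w and recurse on both halves; the two
    # recursive calls reuse the same space, one after the other.
    return any(savitch_reach(adj, u, w, k - 1) and
               savitch_reach(adj, w, v, k - 1)
               for w in adj)

# s-t reachability: savitch_reach(adj, s, t, ceil(log2(n)))
\end{verbatim}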
Barnes et al. <|cite_start|> (Reference: {A sublinear space, polynomial time algorithm for directed s-t connectivity: A deterministic sublinear space, polynomial-time algorithm for directed s-t connectivity, which is the problem of detecting whether there is a path from vertex s to vertex t in a directed graph, is presented. For n-vertex graphs, the algorithm can use as little as $n/2^{\Theta(\sqrt{\log n})}$ space while still running in polynomial time.) <|cite_end|> gave an $O(n/2^{k\sqrt{\log n}})$ space, polynomial time algorithm for this problem. It is an important open question whether we can exhibit a polynomial time and $O(n^{1-\epsilon})$ space algorithm for the reachability problem in directed graphs, for any $\epsilon >0$. Readers may refer to a survey by Wigderson <|cite_start|> (Reference: The Complexity of Graph Connectivity: ) <|cite_end|> for more about the reachability problem. Imai et al. answered this question for the class of directed planar graphs. They gave a polynomial time and $O(\sspace)$ space algorithm by efficiently constructing a {\em planar separator} and applying a divide and conquer strategy. In a recent work, their result has been extended to the classes of {\em high-genus} and {\em $H$-minor-free} graphs <|cite_start|> (Reference: New time-space upperbounds for directed reachability in high-genus and $h$-minor-free graphs: We obtain the following new simultaneous time-space upper bounds for the directed reachability problem. (1) A polynomial-time, O(n^{2/3} * g^{1/3})-space algorithm for directed graphs embedded on orientable surfaces of genus g. (2) A polynomial-time, O(n^{2/3})-space algorithm for all H-minor-free graphs given the tree decomposition, and (3) for K_{3,3}-free and K_5-free graphs, a polynomial-time, O(n^{1/2 + epsilon})-space algorithm, for every epsilon > 0. For the general directed reachability problem, the best known simultaneous time-space upper bound is the BBRS bound, due to Barnes, Buss, Ruzzo, and Schieber, which achieves a space bound of O(n/2^{k * sqrt(log(n))}) with polynomial running time, for any constant k. It is a significant open question to improve this bound for reachability over general directed graphs. Our algorithms beat the BBRS bound for graphs embedded on surfaces of genus n/2^{omega(sqrt(log(n)))}, and for all H-minor-free graphs. This significantly broadens the class of directed graphs for which the BBRS bound can be improved.) <|cite_end|>. The natural question is whether we can extend these results to the shortest path problem. For a special class of graphs known as {\em grid graphs} (a subclass of planar graphs), Asano and Doerr devised an $O(n^{\frac{1}{2}+\epsilon}) $ space and polynomial time algorithm for the shortest path problem <|cite_start|> (Reference: Memory-Constrained Algorithms for Shortest Path Problem: We present an algorithm computing a shortest path between two vertices in a square grid graph with edge weights that uses memory less than linear in the number of vertices (apart from that for storing in the input). For any e > 0, our algorithm uses a work space of) <|cite_end|>. In their paper, Asano and Doerr asked whether their result can be extended to planar graphs in general. In this paper, we give a positive answer to their question and exhibit the first sub-linear space, polynomial time algorithm for the shortest path problem in planar graphs. Note that the shortest path problem for both undirected and directed graphs is {\NL}-complete <|cite_start|> (Reference: Computational complexity - a conceptual perspective: This book is rooted in the thesis that complexity theory is extremely rich in conceptual content, and that this content should be explicitly communicated in expositions and courses on the subject. The desire to provide a corresponding textbook is indeed the motivation for writing the current book and its main governing principle. The book offers a conceptual perspective on complexity theory, and the presentation is designed to highlight this perspective. It is intended mainly for students that wish to learn complexity theory and for educators that intend to teach a course on complexity theory. The book is also intended to promote interest in complexity theory and make it accessible to general readers with adequate background (which is mainly being comfortable with abstract discussions, definitions and proofs). We expect most readers to have a basic knowledge of algorithms, or at least be fairly comfortable with the notion of an algorithm. The book focuses on several sub-areas of complexity theory (including, e.g., pseudorandomness and probabilistic proof systems). In each case, the exposition starts from the intuitive questions addressed by the sub-area, as embodied in the concepts that it studies. The exposition discusses the fundamental importance of these questions, the choices made in the actual formulation of these questions and notions, the approaches that underlie the answers, and the ideas that are embedded in these answers. Our view is that these ("non-technical") aspects are the core of the field, and the presentation attempts to reflect this view.) <|cite_end|>. Another interesting generalization of the reachability problem is the {\RB} problem (for the definition see Section \ref{sec:redblue}). The {\RB} problem is {\NL}-complete even when restricted to planar DAGs <|cite_start|> (Reference: On the power of isolation in planar graphs: We study (deterministic) isolation for certain structures in directed and undirected planar graphs. The motivation for undertaking such a study comes from recent positive results on this topic.
For example: Bourke et al. [2009] isolate a directed path in planar graphs and subsequently Datta et al. [2010b] isolate a perfect matching in bipartite planar graphs. Our first observation is that sufficiently strong (and plausible) isolations for certain structures in planar graphs would have strong consequences such as: NL ⊆ ⊕L, Bipartite-Matching ∈ NC, and NP ⊆ ⊕P. Our second observation is that although we do not yet have such strong isolations for arbitrary planar graphs, we do have them for bipartite planar graphs, that is, non-bipartiteness is the main bottleneck.) <|cite_end|>. A natural relaxation of the above problem is the {\EB} problem, defined in Section \ref{sec:redblue}. In general, the {\EB} problem is {\NP}-complete <|cite_start|> (Reference: The even-path problem for graphs and digraphs: We give a simple linear-time algorithm for finding even-length paths between two specified nodes of a given graph. We show that the same problem for directed graphs is NP-complete.) <|cite_end|>, but for planar graphs it is known to be in {\P} <|cite_start|> (Reference: Finding an Even Simple Path in a Directed Planar Graph: In this paper we show that the following problem, the even simple path (ESP) problem for directed planar graphs, is solvable in polynomial time: Given: a directed planar graph G=(V,E) and two nodes s (starting node), t (target node) \in V; Find: a simple path (i.e., without repeated nodes) from s to t of even length. (The length of the path is the number of edges it contains.)) <|cite_end|>. In this paper, we also give the first sublinear space and polynomial time algorithms known for the {\RB} and {\EB} problems in planar DAGs. Another central problem in Algorithms and Complexity Theory is the problem of finding a perfect matching (denoted {\PM}). The best known upper bound for {\PM} is \emph{non-uniform {\SPL}} <|cite_start|> (Reference: Isolation, matching, and counting: Uniform and nonuniform upper bounds: We show that the perfect matching problem is in the complexity class SPL (in the nonuniform setting). This provides a better upper bound on the complexity of the matching problem, as well as providing motivation for studying the complexity class SPL. Using similar techniques, we show that counting the number of accepting paths of a nondeterministic logspace machine can be done in NL/poly, if the number of paths is small. This clarifies the complexity of the class FewL. Using derandomization techniques, we then improve this to show that this counting problem is in NL. Determining if our other theorems hold in the uniform setting remains an important open question, although we provide evidence that they do. More precisely, if there are problems in DSPACE(n) requiring exponential-size circuits, then all of our results hold in the uniform setting.) <|cite_end|> and the best known hardness is {\NL}-hardness <|cite_start|> (Reference: Constant Depth Reducibility: The purpose of this paper is to study reducibilities that can be computed by combinational logic networks of polynomial size and constant depth containing AND’s, OR’s and NOT’s, with no bound placed on the fan-in of AND-gates and OR-gates. Two such reducibilities are defined, and reductions and equivalences among several common problems such as parity, sorting, integer multiplication, graph connectivity, bipartite matching and network flow are given.
Certain problems are shown to be complete, with respect to these reducibilities, in the complexity classes deterministic logarithmic space, nondeterministic logarithmic space, and deterministic polynomial time. New upper bounds on the size-depth (unbounded fan-in) circuit complexity of symmetric Boolean functions are established.) <|cite_end|>. However, {\PM} in planar graphs is known to be {\L}-hard <|cite_start|> (Reference: Planarity, determinants, permanents, and (unique) matchings: Viewing the computation of the determinant and the permanent of integer matrices as combinatorial problems on associated graphs, we explore the restrictiveness of planarity on their complexities and show that both problems remain as hard as in the general case, that is, GapL- and P- complete. On the other hand, both bipartite planarity and bimodal planarity bring the complexity of permanents down (but no further) to that of determinants. The permanent or the determinant modulo 2 is complete for ⊕L, and we show that parity of paths in a layered grid graph (which is bimodal planar) is also complete for this class. We also relate the complexity of grid graph reachability to that of testing existence/uniqueness of a perfect matching in a planar bipartite graph.) <|cite_end|>. For planar bipartite graphs, the {\PM} problem is known to be in {\UL} <|cite_start|> (Reference: Improved bounds for bipartite matching on surfaces: We exhibit the following new upper bounds on the space complexity and the parallel complexity of the Bipartite Perfect Matching (BPM) problem for graphs of small genus: (1) BPM in planar graphs is in UL (improves upon the SPL bound from Datta, Kulkarni, and Roy; (2) BPM in constant genus graphs is in NL (orthogonal to the SPL bound from Datta, Kulkarni, Tewari, and Vinodchandran.; (3) BPM in poly-logarithmic genus graphs is in NC; (extends the NC bound for O(log n) genus graphs from Mahajan and Varadarajan, and Kulkarni, Mahajan, and Varadarajan. For Part (1) we combine the flow technique of Miller and Naor with the double counting technique of Reinhardt and Allender . For Part (2) and (3) we extend Miller and Naor's result to higher genus surfaces in the spirit of Chambers, Erickson and Nayyeri.) <|cite_end|>. The {\PM} problem in bipartite graphs can be solved in polynomial time using the Ford-Fulkerson algorithm for network flow <|cite_start|> (Reference: Algorithm design: The quest for efficiency in computational methods yields not only fast algorithms, but also insights that lead to elegant, simple, and general problem-solving methods.) <|cite_end|>, but that takes space linear in the number of edges of the graph (see the sketch below). Unfortunately, no sublinear ($O(n^{1-\epsilon})$ space, for any $\epsilon >0$) and polynomial time algorithm is known for {\PM} in planar bipartite graphs.
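For illustration, here is a minimal Python sketch of the augmenting-path approach to bipartite matching (Kuhn's algorithm, the simple special case of the network-flow method mentioned above; our code, not the paper's). The adjacency lists are held in memory, so the space used is linear in the number of edges.

\begin{verbatim}
def bipartite_matching(adj, n_left, n_right):
    # adj[u]: list of right-side neighbours of left vertex u; storing
    # the adjacency explicitly costs space linear in the edge count.
    match_right = [-1] * n_right       # match_right[v] = partner of v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1
    # a perfect matching exists iff size == n_left == n_right
    return size
\end{verbatim}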
The same is true for the problem of finding a Hall obstacle (denoted {\HO} (Decision + Construction)) in planar bipartite graphs, whereas it is known from <|cite_start|> (Reference: Improved bounds for bipartite matching on surfaces: We exhibit the following new upper bounds on the space complexity and the parallel complexity of the Bipartite Perfect Matching (BPM) problem for graphs of small genus: (1) BPM in planar graphs is in UL (improves upon the SPL bound from Datta, Kulkarni, and Roy; (2) BPM in constant genus graphs is in NL (orthogonal to the SPL bound from Datta, Kulkarni, Tewari, and Vinodchandran.; (3) BPM in poly-logarithmic genus graphs is in NC; (extends the NC bound for O(log n) genus graphs from Mahajan and Varadarajan, and Kulkarni, Mahajan, and Varadarajan. For Part (1) we combine the flow technique of Miller and Naor with the double counting technique of Reinhardt and Allender . For Part (2) and (3) we extend Miller and Naor's result to higher genus surfaces in the spirit of Chambers, Erickson and Nayyeri.) <|cite_end|> that {\HO} (Decision) is in {\co-UL} and {\HO} (Construction) is in {\NL} when the graph under consideration is planar bipartite.\\ The problem {\ExPM} (Decision) (first posed in <|cite_start|> (Reference: The complexity of restricted spanning tree problems: The complexity of the following class of problems is investigated: Given a distance matrix, find the shortest spanning tree that is isomorphic to a given prototype. Several classical combinatorial problems, both easy and hard, fall into this category for an appropriate choice of the family of prototypes, for example, taking the family to be the set of all paths gives the traveling salesman problem or taking the family to be the set of all 2-stars gives the weighted matching problem. It is shown that the complexity of these problems depends explicitly on the rate of growth of a simple parameter of the family of prototypes.) <|cite_end|>) asks whether a given graph $G$, whose edges are coloured red or blue, contains a perfect matching with exactly $k$ red edges, for a given integer $k$. This problem is not even known to be in {\P}. We consider a natural relaxation of this problem, obtained by asking only for a perfect matching containing an even number of red edges, and denote it \emph{{\EPM}} (the definition is illustrated by the brute-force snippet below). The {\EPM} problem is in {\P} for bipartite graphs and in {\NL} for planar bipartite graphs <|cite_start|> (Reference: Improved bounds for bipartite matching on surfaces: We exhibit the following new upper bounds on the space complexity and the parallel complexity of the Bipartite Perfect Matching (BPM) problem for graphs of small genus: (1) BPM in planar graphs is in UL (improves upon the SPL bound from Datta, Kulkarni, and Roy; (2) BPM in constant genus graphs is in NL (orthogonal to the SPL bound from Datta, Kulkarni, Tewari, and Vinodchandran.; (3) BPM in poly-logarithmic genus graphs is in NC; (extends the NC bound for O(log n) genus graphs from Mahajan and Varadarajan, and Kulkarni, Mahajan, and Varadarajan. For Part (1) we combine the flow technique of Miller and Naor with the double counting technique of Reinhardt and Allender . For Part (2) and (3) we extend Miller and Naor's result to higher genus surfaces in the spirit of Chambers, Erickson and Nayyeri.) <|cite_end|>. To date, no sublinear space and polynomial time algorithm is known for the {\EPM} problem, even when restricted to planar bipartite graphs.
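The following Python snippet illustrates the {\EPM} definition on a small bipartite instance by exhaustive search (our illustration only; its running time is exponential, whereas the paper targets polynomial time and sublinear space).

\begin{verbatim}
from itertools import permutations

def has_even_red_pm(n, red_edges, blue_edges):
    # Bipartite sides each have n vertices, labelled 0..n-1.
    # red_edges / blue_edges: disjoint sets of (left, right) pairs.
    edges = red_edges | blue_edges
    for perm in permutations(range(n)):       # perm[u] = partner of u
        pairs = [(u, perm[u]) for u in range(n)]
        if all(p in edges for p in pairs):
            if sum(p in red_edges for p in pairs) % 2 == 0:
                return True                   # even number of red edges
    return False
\end{verbatim}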
\subsection*{Our Contribution} In this paper, we prove the following results. \begin{theorem} \label{thm:shortpath} For directed planar graphs (containing no negative weight cycle, and with weights bounded by a polynomial in $n$) and for any constant $ 0 < \epsilon < \frac{1}{2} $, there is an algorithm that solves the {\ShP} problem in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space, where $n$ is the number of vertices of the given graph. \end{theorem} We use a space-efficient construction of a separator for planar graphs; this is one of the main building blocks for the results stated in this paper. Let the separator be $S$. We first calculate the shortest distance of every $v \in S$ from the vertex $s$. The shortest path from $s$ to $t$ must pass through the vertices in the separator, and thus knowing the shortest path from $s$ to each such vertex is enough to find the shortest path from $s$ to $t$. The shortest path from $s$ to any $v \in S$ can be found by applying the same shortest path algorithm recursively to each of the connected components of the graph induced by $V \setminus S$. As a base case we use the {\BellmanFord} algorithm to find the shortest path. The recursive invocation of this technique leads to the time-space bound stated in the theorem above.\\ Another main contribution of this paper is an algorithm for the {\RB} problem in planar DAGs. The main idea behind our algorithm is to use a modified version of the DFS algorithm along with the planar separator. \begin{theorem} \label{thm:redbluepath} For any constant $ 0 < \epsilon < \frac{1}{2} $, there is an algorithm that solves the {\RB} problem in planar DAGs in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space. \end{theorem} Now, using the reduction given in <|cite_start|> (Reference: On the power of isolation in planar graphs: We study (deterministic) isolation for certain structures in directed and undirected planar graphs. The motivation for undertaking such a study comes from recent positive results on this topic. For example: Bourke et al. [2009] isolate a directed path in planar graphs and subsequently Datta et al. [2010b] isolate a perfect matching in bipartite planar graphs. Our first observation is that sufficiently strong (and plausible) isolations for certain structures in planar graphs would have strong consequences such as: NL ⊆ ⊕L, Bipartite-Matching ∈ NC, and NP ⊆ ⊕P. Our second observation is that although we do not yet have such strong isolations for arbitrary planar graphs, we do have them for bipartite planar graphs, that is, non-bipartiteness is the main bottleneck.) <|cite_end|> and the algorithm stated in the above theorem, we get an algorithm that solves the directed reachability problem, in polynomial time and $O(\sspace)$ space, for a fairly large class of graphs described in Section \ref{sec:redblue}. Thus we are able to beat the bound given by Barnes, Buss, Ruzzo and Schieber <|cite_start|> (Reference: {A sublinear space, polynomial time algorithm for directed s-t connectivity: A deterministic sublinear space, polynomial-time algorithm for directed s-t connectivity, which is the problem of detecting whether there is a path from vertex s to vertex t in a directed graph, is presented. For n-vertex graphs, the algorithm can use as little as $n/2^{\Theta(\sqrt{\log n})}$ space while still running in polynomial time.) <|cite_end|> for this class of graphs.\\ In this paper, we also establish a relation between the {\EB} problem in a planar DAG and the problem of finding an odd length cycle in a directed planar graph, and thus argue that both problems have the same simultaneous time-space complexity. We use two colors, red and blue, to color the vertices of the given graph and then use the colors assigned to the separator vertices to detect an odd length cycle: a conflicting color assignment to the same separator vertex indicates the presence of an odd length cycle. Here too we use the recursive approach to color the vertices, and as a base case we use the well-known {\tt BFS} algorithm to detect an odd length cycle in each small component (a sketch of this standard base case is given after the theorem below). Thus we get the following theorem regarding the {\EB} problem. \begin{theorem} \label{thm:evenpath} For any constant $ 0 < \epsilon < \frac{1}{2} $, there is an algorithm that solves the {\EB} problem in planar DAGs in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space. \end{theorem}
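For reference, the textbook {\tt BFS} base case looks as follows in Python for undirected graphs (our sketch, not the paper's code; the directed planar setting of Section \ref{sec:redblue} needs additional machinery on top of this). A colour conflict between two adjacent, already-coloured vertices witnesses an odd cycle.

\begin{verbatim}
from collections import deque

def has_odd_cycle(adj):
    # adj: dict mapping each vertex to its set of neighbours (undirected).
    # BFS 2-colouring: an odd cycle exists iff the graph is not bipartite.
    colour = {}
    for root in adj:
        if root in colour:
            continue
        colour[root] = 0
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return True        # colour conflict: odd cycle
    return False
\end{verbatim}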
Another contribution of this paper is a time-space efficient algorithm for the perfect matching problem in planar bipartite graphs. \begin{theorem} \label{thm:perfectmatching} In planar bipartite graphs, for any constant $ 0 < \epsilon < \frac{1}{2} $,\\ (a) {\PM} (Decision + Construction) can be solved in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space.\\ (b) {\HO} (Decision + Construction) can be solved in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space. \end{theorem} We build on Miller and Naor's algorithm for perfect matching in planar bipartite graphs. We show that this algorithm runs in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space, since the only hard part of the algorithm is computing shortest distances. We also argue that the problem of finding a Hall obstacle is directly associated with the problem of finding a negative weight cycle, and thus we obtain for it the same simultaneous time-space bound as for the problem of detecting a negative weight cycle.\\ Next we show that the complexity of even perfect matching in planar bipartite graphs is the same as that of the perfect matching problem in planar bipartite graphs and of deciding the presence of an odd length cycle in a directed planar graph. Thus we get the following theorem for the {\EPM} problem. \begin{theorem} \label{thm:evenPM} For any constant $ 0 < \epsilon < \frac{1}{2} $, there exists an algorithm that solves {\EPM} in planar bipartite graphs in polynomial time and $ O(n^{\frac{1}{2}+\epsilon}) $ space. \end{theorem} The rest of the paper is organized as follows. In the next section, we give some notation and definitions used in this paper. In Section \ref{sec:shortestpath}, we give an algorithm for the shortest path problem in directed planar graphs. In Section \ref{sec:redblue}, we give a simultaneous time-space bound for deciding the presence of a Red-Blue path in a planar DAG; we then relate the problem of deciding the presence of an odd length cycle in directed planar graphs to the problem of deciding the presence of an even path between two given vertices in a planar DAG, and thus give the same simultaneous time-space bound for both of these problems. Finally, in Section \ref{sec:matching}, we discuss the simultaneous time-space bounds of some matching problems in planar bipartite graphs. <|paper_end|>
[ "<|reference_start|> Memory-Constrained Algorithms for Shortest Path Problem: We present an algorithm computing a shortest path between to vertices in a square grid graph with edge weights that uses memory less than linear in the number of vertices (apart from that for storing in the input). For any e > 0, our algorithm uses a work space of <|reference_end|>", "<|reference_start|> Planarity, determinants, permanents, and (unique) matchings: Viewing the computation of the determinant and the permanent of integer matrices as combinatorial problems on associated graphs, we explore the restrictiveness of planarity on their complexities and show that both problems remain as hard as in the general case, that is, GapL- and P- complete. On the other hand, both bipartite planarity and bimodal planarity bring the complexity of permanents down (but no further) to that of determinants. The permanent or the determinant modulo 2 is complete for ⊕L, and we show that parity of paths in a layered grid graph (which is bimodal planar) is also complete for this class. We also relate the complexity of grid graph reachability to that of testing existence/uniqueness of a perfect matching in a planar bipartite graph. <|reference_end|>", "<|reference_start|> Algorithm design: The quest for efficiency in computational methods yields not only fast algorithms, but also insights that lead to elegant, simple, and general problem-solving methods. <|reference_end|>", "<|reference_start|> The complexity of restricted spanning tree problems: The complexity of the foUowmg class of problems Is investigated: Given a distance matrix, fred the shortest spanning tree that is isomorphic to a given prototype. Several classical combinatorial problems, both easy and hard, fall into this category for an appropriate choice of the family of prototypes, for example, taking the family to be the set of all paths gives the traveling salesman problem or taking the family to be the set of all 2-stars gives the weighted matching problem It is shown that the complexity of these problems depends explicitly on the rate of growth of a sLmple parameter of the family of prototypes. <|reference_end|>" ]
[ 8, 15, 17, 19 ]
{"<|cite_1|>": "ss-681930", "<|cite_2|>": "ss-1012394", "<|cite_3|>": "ss-1659916", "<|cite_4|>": "ss-794185", "<|cite_5|>": "ss-774009", "<|cite_6|>": "ss-889446", "<|cite_7|>": "ss-2433423", "<|cite_9|>": "ss-794186", "<|cite_10|>": "ss-889447", "<|cite_11|>": "ss-794187", "<|cite_12|>": "ss-794188", "<|cite_13|>": "ss-2516392", "<|cite_14|>": "ss-794189", "<|cite_15|>": "ss-1272424", "<|cite_16|>": "ss-851862", "<|cite_17|>": "ss-1934088", "<|cite_18|>": "ss-794190", "<|cite_19|>": "ss-1519843", "<|cite_20|>": "ss-794190", "<|cite_21|>": "ss-745512", "<|cite_22|>": "ss-794190", "<|cite_24|>": "ss-794188", "<|cite_25|>": "ss-889446"}
2404.15305-1
<|cite_start|> (Reference: A Survey of Unsupervised Deep Domain Adaptation: Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. As a complement to this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain with the goal of performing well at test-time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially-costly target data labels. This survey will compare these approaches by examining alternative methods, the unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.) <|cite_end|> offer a more suitable fit for our scenario, as they allow the utilization of fine-tuning data collected by end-users. Most approaches aim to utilize unlabeled data, or a limited number of labeled target-domain samples <|cite_start|> (Reference: Unsupervised Domain Adaptation by Backpropagation: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.) <|cite_end|> <|cite_start|> (Reference: Correlation-aware Adversarial Domain Adaptation and Generalization: Domain adaptation (DA) and domain generalization (DG) have emerged as a solution to the domain shift problem where the distribution of the source and target data is different. The task of DG is more challenging than DA as the target data is totally unseen during the training phase in DG scenarios. The current state-of-the-art employs adversarial techniques, however, these are rarely considered for the DG problem. Furthermore, these approaches do not consider correlation alignment which has been proven highly beneficial for minimizing domain discrepancy.
In this paper, we propose a correlation-aware adversarial DA and DG framework where the features of the source and target data are minimized using correlation alignment along with adversarial learning. Incorporating the correlation alignment module along with adversarial learning helps to achieve a more domain agnostic model due to the improved ability to reduce domain discrepancy with unlabeled target data more effectively. Experiments on benchmark datasets serve as evidence that our proposed method yields improved state-of-the-art performance.) <|cite_end|>to adapt the model to the target domain. In activity recognition, DA has been approached as an efficient transfer learning problem <|cite_start|> (Reference: Stratified Transfer Learning for Cross-domain Activity Recognition: In activity recognition, it is often expensive and time-consuming to acquire sufficient activity labels. To solve this problem, transfer learning leverages the labeled samples from the source domain to annotate the target domain which has few or none labels. Existing approaches typically consider learning a global domain shift while ignoring the intra-affinity between classes, which will hinder the performance of the algorithms. In this paper, we propose a novel and general cross-domain learning framework that can exploit the intra-affinity of classes to perform intra-class knowledge transfer. The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition. Specifically, STL first obtains pseudo labels for the target domain via majority voting technique. Then, it performs intra-class knowledge transfer iteratively to transform both domains into the same subspaces. Finally, the labels of target domain are obtained via the second annotation. To evaluate the performance of STL, we conduct comprehensive experiments on three large public activity recognition datasets~(i.e. OPPORTUNITY, PAMAP2, and UCI DSADS), which demonstrates that STL significantly outperforms other state-of-the-art methods w.r.t. classification accuracy (improvement of 7.68%). Furthermore, we extensively investigate the performance of STL across different degrees of similarities and activity levels between domains. And we also discuss the potential of STL in other pervasive computing applications to provide empirical experience for future research.) <|cite_end|> <|cite_start|> (Reference: Cross-domain Activity Recognition via Substructural Optimal Transport: It is expensive and time-consuming to collect sufficient labeled data for human activity recognition (HAR). Domain adaptation is a promising approach for cross-domain activity recognition. Existing methods mainly focus on adapting cross-domain representations via domain-level, class-level, or sample-level distribution matching. However, they might fail to capture the fine-grained locality information in activity data. The domain- and class-level matching are too coarse that may result in under-adaptation, while sample-level matching may be affected by the noise seriously and eventually cause over-adaptation. In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer. Based on SSDA, we propose an optimal transport-based implementation, Substructural Optimal Transport (SOT), for cross-domain HAR. 
We obtain the substructures of activities via clustering methods and seeks the coupling of the weighted substructures between different domains. We conduct comprehensive experiments on four public activity recognition datasets (i.e. UCI-DSADS, UCI-HAR, USC-HAD, PAMAP2), which demonstrates that SOT significantly outperforms other state-of-the-art methods w.r.t classification accuracy (9%+ improvement). In addition, our mehtod is 5x faster than traditional OT-based DA methods with the same hyper-parameters.) <|cite_end|> <|cite_start|> (Reference: Scaling human activity recognition via deep learning-based domain adaptation: We investigate the problem of making human activity recognition (AR) scalable-i.e., allowing AR classifiers trained in one context to be readily adapted to a different contextual domain. This is important because AR technologies can achieve high accuracy if the classifiers are trained for a specific individual or device, but show significant degradation when the same classifier is applied context-e.g., to a different device located at a different on-body position. To allow such adaptation without requiring the onerous step of collecting large volumes of labeled training data in the target domain, we proposed a transductive transfer learning model that is specifically tuned to the properties of convolutional neural networks (CNNs). Our model, called HDCNN, assumes that the relative distribution of weights in the different CNN layers will remain invariant, as long as the set of activities being monitored does not change. Evaluation on real-world data shows that HDCNN is able to achieve high accuracy even without any labeled training data in the target domain, and offers even higher accuracy (significantly outperforming competitive shallow and deep classifiers) when even a modest amount of labeled training data is available.) <|cite_end|>, and methods employing feature matching and confusion maximization <|cite_start|> (Reference: A systematic study of unsupervised domain adaptation for robust human-activity recognition: Wearable sensors are increasingly becoming the primary interface for monitoring human activities. However, in order to scale human activity recognition (HAR) using wearable sensors to million of users and devices, it is imperative that HAR computational models are robust against real-world heterogeneity in inertial sensor data. In this paper, we study the problem of wearing diversity which pertains to the placement of the wearable sensor on the human body, and demonstrate that even state-of-the-art deep learning models are not robust against these factors. The core contribution of the paper lies in presenting a first-of-its-kind in-depth study of unsupervised domain adaptation (UDA) algorithms in the context of wearing diversity -- we develop and evaluate three adaptation techniques on four HAR datasets to evaluate their relative performance towards addressing the issue of wearing diversity. More importantly, we also do a careful analysis to learn the downsides of each UDA algorithm and uncover several implicit data-related assumptions without which these algorithms suffer a major degradation in accuracy. Taken together, our experimental findings caution against using UDA as a silver bullet for adapting HAR models to new domains, and serve as practical guidelines for HAR practitioners as well as pave the way for future research on domain adaptation in HAR.) <|cite_end|>have been proposed. 
MetaSense <|cite_start|> (Reference: MetaSense: Few-Shot Adaptation to Untrained Conditions in Deep Mobile Sensing: Recent improvements in deep learning and hardware support offer a new breakthrough in mobile sensing; we could enjoy context-aware services and mobile healthcare on a mobile device powered by artificial intelligence. However, most related studies perform well only with a certain level of similarity between trained and target data distribution, while in practice, a specific user's behaviors and device make sensor inputs different. Consequently, the performance of such applications might suffer in diverse user and device conditions as training deep models in such countless conditions is infeasible. To mitigate the issue, we propose MetaSense, an adaptive deep mobile sensing system utilizing only a few (e.g., one or two) data instances from the target user. MetaSense employs meta learning that learns how to adapt to the target user's condition, by rehearsing multiple similar tasks generated from our unique task generation strategies in offline training. The trained model has the ability to rapidly adapt to the target user's condition when a few data are available. Our evaluation with real-world traces of motion and audio sensors shows that MetaSense not only outperforms the state-of-the-art transfer learning by 18% and meta learning based approaches by 15% in terms of accuracy, but also requires significantly less adaptation time for the target user.) <|cite_end|>introduced a meta-learning-based model training approach followed by few-shot adaptation to create domain-specific models. DAPPER <|cite_start|> (Reference: DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing: Many applications utilize sensors in mobile devices and machine learning to provide novel services. However, various factors such as different users, devices, and environments impact the performance of such applications, thus making the domain shift (i.e., distributional shift between the training domain and the target domain) a critical issue in mobile sensing. Despite attempts in domain adaptation to solve this challenging problem, their performance is unreliable due to the complex interplay among diverse factors. In principle, the performance uncertainty can be identified and redeemed by performance validation with ground-truth labels. However, it is infeasible for every user to collect high-quality, sufficient labeled data. To address the issue, we present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with only unlabeled target data. Our key idea is to approximate the model performance based on the mutual information between the model inputs and corresponding outputs. Our evaluation with four real-world sensing datasets compared against six baselines shows that on average, DAPPER outperforms the state-of-the-art baseline by 39.8% in estimation accuracy. Moreover, our on-device experiment shows that DAPPER achieves up to 396X less computation overhead compared with the baselines.) <|cite_end|>is proposed as another line of research for estimating the expected performance of DA in mobile sensing. However, these approaches assume the availability of labels in the source domain, making them incompatible with our unsupervised pre-training scenario. 
DARLING <|cite_start|> (Reference: Towards Unsupervised Domain Generalization: Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains. The performances of current DG methods largely rely on sufficient labeled data, which are usually costly or unavailable, however. Since unlabeled data are far more accessible, we seek to explore how unsupervised learning can help deep models generalize across domains. Specifically, we study a novel generalization problem called unsupervised domain generalization (UDG), which aims to learn generalizable models with unlabeled data and analyze the effects of pre-training on DG. In UDG, models are pretrained with unlabeled data from various source domains before being trained on labeled source data and eventually tested on unseen target domains. Then we propose a method named Domain-Aware Representation LearnING (DARLING) to cope with the significant and misleading heterogeneity within unlabeled pretraining data and severe distribution shifts between source and target data. Surprisingly we observe that DARLING can not only counterbalance the scarcity of labeled data but also further strengthen the generalization ability of models when the labeled data are insufficient. As a pretraining approach, DARLING shows superior or comparable performance compared with ImageNet pretraining protocol even when the available data are unlabeled and of a vastly smaller amount compared to ImageNet, which may shed light on improving generalization with large-scale unlabeled data.) <|cite_end|>addresses the domain shift from unsupervised learning and covers the problem by integrating conditional optimization that optimizes the contrastive loss per domain. However, our approach differs from DARLING in that our method uses the available target domain data (\ie fine-tuning data) to train domain-specific models. In our evaluation (\S\ref{sec:overall_results}), we demonstrate that this utilization results in superior performance. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/overview.pdf} \vspace{-6pt} \caption{Overview of \proj{} framework.} \vspace{-10pt} \label{fig:project_overview} \end{figure*} \subsection{Unsupervised Meta-Learning} We consider unsupervised meta-learning (UML) <|cite_start|> (Reference: Unsupervised Meta-Learning For Few-Shot Image Classification: Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks. The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, no explicit information about the classes or any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for meta-learning phase. Labels are only needed at the final target task learning step, and they can be as little as one sample per class. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. 
\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/overview.pdf} \vspace{-6pt} \caption{Overview of \proj{} framework.} \vspace{-10pt} \label{fig:project_overview} \end{figure*} \subsection{Unsupervised Meta-Learning} We consider unsupervised meta-learning (UML) <|cite_start|> (Reference: Unsupervised Meta-Learning For Few-Shot Image Classification: Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks. The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, no explicit information about the classes or any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for meta-learning phase. Labels are only needed at the final target task learning step, and they can be as little as one sample per class. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a reduction in the required labels of several orders of magnitude.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models: Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach of sampling from the latent space that generates objects grouped into synthetic classes forming the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches.) <|cite_end|> <|cite_start|> (Reference: Self-Supervised Set Representation Learning for Unsupervised Meta-Learning: Unsupervised meta-learning (UML) essentially shares the spirit of self-supervised learning (SSL) in that their goal aims at learning models without any human supervision so that the models can be adapted to downstream tasks. Further, the learning objective of self-supervised learning, which pulls positive pairs closer and repels negative pairs, also resembles metric-based meta-learning. Metric-based meta-learning is one of the most successful meta-learning methods, which learns to minimize the distance between representations from the same class. One notable aspect of metric-based meta-learning, however, is that it is widely interpreted as a set-level problem since the inference of discriminative class prototypes (or set representations) from few examples is crucial for the performance of downstream tasks. Motivated by this, we propose Set-SimCLR, a novel self-supervised set representation learning framework for targeting UML problem. Specifically, our Set-SimCLR learns a set encoder on top of instance representations to maximize the agreement between two sets of augmented samples, which are generated by applying stochastic augmentations to a given image. We theoretically analyze how our proposed set representation learning can potentially improve the generalization performance at the meta-test. We also empirically validate its effectiveness on various benchmark datasets, showing that Set-SimCLR largely outperforms both UML and instance-level self-supervised learning baselines.) <|cite_end|> methods due to their effectiveness in few-shot adaptation, which is also applicable to our unsupervised pre-training scenario. Traditional methods employ pseudo-labeling of data through augmentation <|cite_start|> (Reference: Unsupervised Meta-Learning For Few-Shot Image Classification: Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks. The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, no explicit information about the classes or any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for meta-learning phase. Labels are only needed at the final target task learning step, and they can be as little as one sample per class. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a reduction in the required labels of several orders of magnitude.) <|cite_end|> or generative methods <|cite_start|> (Reference: Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models: Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach of sampling from the latent space that generates objects grouped into synthetic classes forming the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches.) <|cite_end|>, followed by supervised meta-learning <|cite_start|> (Reference: Learning to learn by gradient descent by gradient descent: The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.) <|cite_end|> using the generated labels.
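As a toy illustration of this augmentation-based task construction (our paraphrase of the UMTRA recipe quoted above, not the authors' code; function and parameter names are ours), the sketch below samples $N$ unlabeled examples, treats each one as its own pseudo-class, and uses augmented copies of them as the query set. A supervised meta-learner can then be trained on a stream of such synthetic tasks exactly as if they were labeled few-shot tasks.

\begin{verbatim}
import numpy as np

def make_synthetic_task(pool, augment, n_way, rng):
    # pool: (M, ...) array of unlabeled examples; augment: a stochastic
    # augmentation function. Sample N examples without replacement and
    # assign each its own pseudo-label.
    idx = rng.choice(len(pool), size=n_way, replace=False)
    support_x = pool[idx]                  # one "shot" per pseudo-class
    support_y = np.arange(n_way)           # pseudo-labels 0 .. N-1
    query_x = np.stack([augment(x) for x in support_x])
    query_y = np.arange(n_way)             # queries share the pseudo-labels
    return (support_x, support_y), (query_x, query_y)
\end{verbatim}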
Set-SimCLR <|cite_start|> (Reference: Self-Supervised Set Representation Learning for Unsupervised Meta-Learning: Unsupervised meta-learning (UML) essentially shares the spirit of self-supervised learning (SSL) in that their goal aims at learning models without any human supervision so that the models can be adapted to downstream tasks. Further, the learning objective of self-supervised learning, which pulls positive pairs closer and repels negative pairs, also resembles metric-based meta-learning. Metric-based meta-learning is one of the most successful meta-learning methods, which learns to minimize the distance between representations from the same class. One notable aspect of metric-based meta-learning, however, is that it is widely interpreted as a set-level problem since the inference of discriminative class prototypes (or set representations) from few examples is crucial for the performance of downstream tasks. Motivated by this, we propose Set-SimCLR, a novel self-supervised set representation learning framework for targeting UML problem. Specifically, our Set-SimCLR learns a set encoder on top of instance representations to maximize the agreement between two sets of augmented samples, which are generated by applying stochastic augmentations to a given image. We theoretically analyze how our proposed set representation learning can potentially improve the generalization performance at the meta-test. We also empirically validate its effectiveness on various benchmark datasets, showing that Set-SimCLR largely outperforms both UML and instance-level self-supervised learning baselines.) <|cite_end|>, during pre-training, trains a set encoder by creating sets of augmented samples from the same data, employing contrastive learning to maximize agreement between set embeddings.
In fine-tuning, it composes sets of data by class, generating class prototypes with the set encoder to initialize the classifier's parameters. These prototypes enable rapid adaptation during further few-shot fine-tuning. However, our approach differs in that we perform the adaptation to refine the encoder for the target domain, while Set-SimCLR primarily focuses on making the downstream classifier adaptable to few-shot fine-tuning. Our evaluation (\S\ref{sec:overall_results}) demonstrates the superior performance of our approach in mobile sensing scenarios. <|paper_end|>
[ "<|reference_start|> Stratified Transfer Learning for Cross-domain Activity Recognition: In activity recognition, it is often expensive and time-consuming to acquire sufficient activity labels. To solve this problem, transfer learning leverages the labeled samples from the source domain to annotate the target domain which has few or none labels. Existing approaches typically consider learning a global domain shift while ignoring the intra-affinity between classes, which will hinder the performance of the algorithms. In this paper, we propose a novel and general cross-domain learning framework that can exploit the intra-affinity of classes to perform intra-class knowledge transfer. The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition. Specifically, STL first obtains pseudo labels for the target domain via majority voting technique. Then, it performs intra-class knowledge transfer iteratively to transform both domains into the same subspaces. Finally, the labels of target domain are obtained via the second annotation. To evaluate the performance of STL, we conduct comprehensive experiments on three large public activity recognition datasets~(i.e. OPPORTUNITY, PAMAP2, and UCI DSADS), which demonstrates that STL significantly outperforms other state-of-the-art methods w.r.t. classification accuracy (improvement of 7.68%). Furthermore, we extensively investigate the performance of STL across different degrees of similarities and activity levels between domains. And we also discuss the potential of STL in other pervasive computing applications to provide empirical experience for future research. <|reference_end|>", "<|reference_start|> A systematic study of unsupervised domain adaptation for robust human-activity recognition: Wearable sensors are increasingly becoming the primary interface for monitoring human activities. However, in order to scale human activity recognition (HAR) using wearable sensors to million of users and devices, it is imperative that HAR computational models are robust against real-world heterogeneity in inertial sensor data. In this paper, we study the problem of wearing diversity which pertains to the placement of the wearable sensor on the human body, and demonstrate that even state-of-the-art deep learning models are not robust against these factors. The core contribution of the paper lies in presenting a first-of-its-kind in-depth study of unsupervised domain adaptation (UDA) algorithms in the context of wearing diversity -- we develop and evaluate three adaptation techniques on four HAR datasets to evaluate their relative performance towards addressing the issue of wearing diversity. More importantly, we also do a careful analysis to learn the downsides of each UDA algorithm and uncover several implicit data-related assumptions without which these algorithms suffer a major degradation in accuracy. Taken together, our experimental findings caution against using UDA as a silver bullet for adapting HAR models to new domains, and serve as practical guidelines for HAR practitioners as well as pave the way for future research on domain adaptation in HAR. <|reference_end|>", "<|reference_start|> Unsupervised Meta-Learning For Few-Shot Image Classification: Few-shot or one-shot learning of classifiers requires a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. 
In this paper, we propose UMTRA, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks. The meta-learning step of UMTRA is performed on a flat collection of unlabeled images. While we assume that these images can be grouped into a diverse set of classes and are relevant to the target task, no explicit information about the classes or any labels are needed. UMTRA uses random sampling and augmentation to create synthetic training tasks for meta-learning phase. Labels are only needed at the final target task learning step, and they can be as little as one sample per class. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a reduction in the required labels of several orders of magnitude. <|reference_end|>", "<|reference_start|> Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models: Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach of sampling from the latent space that generates objects grouped into synthetic classes forming the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches. <|reference_end|>" ]
[ 3, 6, 10, 14 ]
{"<|multi_cite_1_1|>": "ss-1865382", "<|multi_cite_1_2|>": "ss-1865383", "<|multi_cite_2_1|>": "ss-1597539", "<|multi_cite_2_2|>": "ss-2078963", "<|multi_cite_3_1|>": "ss-1088726", "<|multi_cite_3_2|>": "ss-1612644", "<|cite_4|>": "arxiv-401736", "<|multi_cite_5_1|>": "arxiv-308983", "<|multi_cite_5_2|>": "arxiv-461355", "<|cite_6|>": "arxiv-305316", "<|cite_7|>": "arxiv-216350", "<|cite_8|>": "ss-1314002", "<|cite_9|>": "ss-778725", "<|cite_10|>": "arxiv-461355", "<|cite_11|>": "arxiv-461355", "<|cite_12|>": "ss-1322217", "<|multi_cite_13_1|>": "ss-1387269", "<|multi_cite_13_2|>": "arxiv-435740", "<|multi_cite_13_3|>": "arxiv-426865", "<|multi_cite_14_1|>": "ss-770174", "<|multi_cite_14_2|>": "ss-1322217", "<|multi_cite_14_3|>": "ss-1555870", "<|multi_cite_14_4|>": "arxiv-144470", "<|multi_cite_15_1|>": "ss-1322217", "<|multi_cite_15_2|>": "ss-778725", "<|multi_cite_15_3|>": "ss-1540706", "<|multi_cite_15_4|>": "ss-2468576", "<|multi_cite_16_1|>": "arxiv-354825", "<|multi_cite_16_2|>": "ss-1857239", "<|cite_17|>": "arxiv-401736", "<|cite_18|>": "arxiv-216350", "<|cite_19|>": "arxiv-300587", "<|cite_20|>": "arxiv-234041", "<|cite_21|>": "arxiv-248169", "<|multi_cite_22_1|>": "ss-833836", "<|multi_cite_22_2|>": "arxiv-305316", "<|cite_23|>": "arxiv-439970", "<|cite_24|>": "ss-819668", "<|cite_25|>": "arxiv-437422", "<|cite_26|>": "arxiv-396349", "<|multi_cite_27_1|>": "arxiv-308983", "<|multi_cite_27_2|>": "arxiv-461355", "<|multi_cite_28_1|>": "ss-1229244", "<|multi_cite_28_2|>": "ss-1342206", "<|cite_29|>": "arxiv-325293", "<|multi_cite_30_1|>": "arxiv-40081", "<|multi_cite_30_2|>": "arxiv-189807", "<|cite_31|>": "ss-1116051", "<|multi_cite_32_1|>": "arxiv-289376", "<|multi_cite_32_2|>": "arxiv-332284", "<|multi_cite_33_1|>": "arxiv-136884", "<|multi_cite_33_2|>": "arxiv-256238", "<|multi_cite_34_1|>": "arxiv-335857", "<|multi_cite_34_2|>": "ss-1251243", "<|cite_35|>": "ss-1387269", "<|cite_36|>": "arxiv-426865", "<|cite_37|>": "arxiv-183635", "<|multi_cite_38_1|>": "arxiv-66621", "<|multi_cite_38_2|>": "arxiv-236851", "<|multi_cite_39_1|>": "arxiv-144470", "<|multi_cite_39_2|>": "arxiv-319382", "<|multi_cite_39_3|>": "ss-2342540", "<|cite_40|>": "ss-770174", "<|cite_41|>": "ss-1322217", "<|cite_42|>": "arxiv-382454", "<|cite_43|>": "arxiv-354825", "<|multi_cite_44_1|>": "arxiv-182302", "<|multi_cite_44_2|>": "arxiv-272735", "<|multi_cite_44_3|>": "ss-1857239", "<|cite_45|>": "arxiv-182302", "<|cite_46|>": "arxiv-272735", "<|cite_47|>": "arxiv-100100", "<|cite_48|>": "ss-1857239"}
2101.00360
<|paper_start|> Title: New-Type Hoeffding's Inequalities and Application in Tail Bounds Abstract: New-Type Hoeffding's Inequalities and Application in Tail Bounds: It is well known that Hoeffding's inequality has many applications in the signal and information processing fields. How to improve Hoeffding's inequality and find refinements of its applications has always attracted much attention. An improvement of Hoeffding's inequality was recently given by Hertz \cite{r1}. Even though this improvement is not large, it can still be used to update many known results based on the original Hoeffding's inequality, especially the Hoeffding-Azuma inequality for martingales. However, both the original Hoeffding's inequality and its refinement by Hertz only consider the first order moment of random variables. In this paper, we present a new type of Hoeffding's inequalities, where the higher order moments of random variables are taken into account. They yield considerable improvements in the evaluation of tail bounds compared with the known results. It is expected that the developed new-type Hoeffding's inequalities will find further interesting applications in related fields that use Hoeffding's results. Introduction It is well known that Hoeffding's inequality has been applied in many scenarios in the signal and information processing fields. Since Hoeffding's inequality was first established in 1963 <|cite_start|> (Reference: Probability Inequalities for Sums of Bounded Random Variables: ) <|cite_end|>, it has attracted much attention in academic research <|cite_start|> (Reference: Chernoff Hoeffding bounds for applications with limited independence: Chernoff-Hoeffding (CH) bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables (r.v.'s). We present a simple technique that gives slightly better bounds than these and that more importantly requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are sharp and provide a better understanding of the proof techniques behind these bounds. These results also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for jobshop scheduling. The limited independence result implies that a reduced amount and weaker sources of randomness are sufficient for randomized algorithms whose analyses use the CH bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.) <|cite_end|> <|cite_start|> (Reference: A refinement of Hoeffding's inequality: In this paper, we present a refinement of Hoeffding's inequality which is of closed form and which significantly improves on this inequality in many cases. Some numerical comparisons are also presented.) <|cite_end|> and in industry. In particular, in the last decade it has been used to evaluate channel code designs <|cite_start|> (Reference: Second-Order Rate Region of Constant-Composition Codes for the Multiple-Access Channel: This paper studies the second-order asymptotics of coding rates for the discrete memoryless multiple-access channel with a fixed target error probability.
Using constant-composition random coding, coded time-sharing, and a variant of Hoeffding's combinatorial central limit theorem, an inner bound on the set of locally achievable second-order coding rates is given for each point on the boundary of the capacity region. It is shown that the inner bound for constant-composition random coding includes that recovered by i.i.d. random coding, and that the inclusion may be strict. The inner bound is extended to the Gaussian multiple-access channel via an increasingly fine quantization of the inputs.) <|cite_end|> <|cite_start|> (Reference: On concentration of measures for LDPC code ensembles: This work considers the concentration of measures for low-density parity-check (LDPC) code ensembles. The two results derived in this paper follow from Azuma's inequality for Doob martingales with bounded differences. The first result is a tightened concentration inequality for the conditional entropy (originally derived by Méasson et al.), and the second result is a concentration inequality for the cardinality of the fundamental systems of cycles of a bipartite graph from the ensemble.) <|cite_end|> and achievable rate over nonlinear channels <|cite_start|> (Reference: New achievable rates for nonlinear Volterra channels via martingale inequalities: This paper establishes new achievable rates for nonlinear Volterra communication channels using refined versions of the Azuma-Hoeffding inequality. The characteristics of these rates are illuminated in special cases of interest that include time invariant linear channels with memory, memoryless non-linear channels, and Volterra channel models.) <|cite_end|> as well as delay performance in CSMA with linear virtual channels under a general topology <|cite_start|> (Reference: Delay optimal CSMA with linear virtual channels under a general topology: In the past few years, an exciting progress has been made on CSMA (Carrier Sense Multiple Access) algorithms that achieve throughput and utility optimality for wireless networks. However, most of these algorithms are known to exhibit poor delay performance making them impractical for implementation. Recently, several papers have addressed the delay issue of CSMA and yet, most of them are limited, in the sense that they focus merely on specific network scenarios with certain conditions rather than general network topology, achieve low delay at the cost of throughput reduction, or lack rigorous provable guarantees. In this paper, we focus on the recent idea of exploiting multiple channels (actually or virtually) for delay reduction in CSMA, and prove that it is per-link delay order-optimal, i.e., O(1)-asymptotic-delay per link, if the number of virtual channels is logarithmic with respect to mixing time of the underlying CSMA Markov chain. The logarithmic number is typically small, i.e., at most linear with respect to the network size. In other words, our contribution provides not only a provable framework for the multiple-channel based CSMA, but also the required explicit number of virtual-multi-channels, which is of great importance for actual implementation. The key step of our analytic framework lies in using quadratic Lyapunov functions in conjunction with (recursively applying) Lindley equation and Azuma's inequality for obtaining an exponential decaying property in certain queueing dynamics. We believe that our technique is of broader interest in analyzing the delay performance of queueing systems with multiple periodic schedulers.) 
<|cite_end|> in information theory <|cite_start|> (Reference: Concentration of Measure Inequalities in Information Theory, Communications, and Coding: Concentration inequalities have been the subject of exciting developments during the last two decades, and have been intensively studied and used as a powerful tool in various areas. These include convex geometry, functional analysis, statistical physics, mathematical statistics, pure and applied probability theory, information theory, theoretical computer science, learning theory, and dynamical systems. Concentration of Measure Inequalities in Information Theory, Communications, and Coding focuses on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding. In addition to being a survey, this monograph also includes various new recent results derived by the authors. This third edition of the bestselling book introduces the reader to the martingale method and the Efron-Stein-Steele inequalities in completely new sections. A new application of lossless source coding with side information is described in detail. Finally, the references have been updated and ones included that have been published since the original publication. Concentration of Measure Inequalities in Information Theory, Communications, and Coding is essential reading for all researchers and scientists in information theory and coding.) <|cite_end|>. As a key tool, it has also found applications in machine learning and big data processing, e.g., PAC-Bayesian analysis and Markov model analysis in machine learning <|cite_start|> (Reference: PAC-Bayesian Inequalities for Martingales: We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability theory and statistics, where martingales are encountered. We also present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the [0,1] interval by the expectation of the same function of independent Bernoulli variables. This inequality is applied to derive a tighter analog of Hoeffding-Azuma's inequality.) <|cite_end|> <|cite_start|> (Reference: Hoeffding's lemma for Markov Chains and its applications to statistical learning: We extend Hoeffding's lemma to general-state-space and not necessarily reversible Markov chains. Let $\{X_i\}_{i \ge 1}$ be a stationary Markov chain with invariant measure $\pi$ and absolute spectral gap $1-\lambda$, where $\lambda$ is defined as the operator norm of the transition kernel acting on mean zero and square-integrable functions with respect to $\pi$. Then, for any bounded functions $f_i: x \mapsto [a_i,b_i]$, the sum of $f_i(X_i)$ is sub-Gaussian with variance proxy $\frac{1+\lambda}{1-\lambda} \cdot \sum_i \frac{(b_i-a_i)^2}{4}$. This result differs from the classical Hoeffding's lemma by a multiplicative coefficient of $(1+\lambda)/(1-\lambda)$, and simplifies to the latter when $\lambda = 0$. The counterpart of Hoeffding's inequality for Markov chains immediately follows.
Our results assume none of countable state space, reversibility and time-homogeneity of Markov chains and cover time-dependent functions with various ranges. We illustrate the utility of these results by applying them to six problems in statistics and machine learning.) <|cite_end|>, statistical model bias analysis <|cite_start|> (Reference: How biased is your model? Concentration Inequalities, Information and Model Bias: We derive tight and computable bounds on the bias of statistical estimators, or more generally of quantities of interest, when evaluated on a baseline model P rather than on the typically unknown true model Q. Our proposed method combines the scalable information inequality derived by P. Dupuis, K.Chowdhary, the authors and their collaborators together with classical concentration inequalities (such as Bennett's and Hoeffding-Azuma inequalities). Our bounds are expressed in terms of the Kullback-Leibler divergence R(Q||P) of model Q with respect to P and the moment generating function for the statistical estimator under P. Furthermore, concentration inequalities, i.e. bounds on moment generating functions, provide tight and computationally inexpensive model bias bounds for quantities of interest. Finally, they allow us to derive rigorous confidence bands for statistical estimators that account for model bias and are valid for an arbitrary amount of data.) <|cite_end|>, concept drift in online learning for big data mining <|cite_start|> (Reference: Online and Non-Parametric Drift Detection Methods Based on Hoeffding's Bounds: Incremental and online learning algorithms are more relevant in the data mining context because of the increasing necessity to process data streams. In this context, the target function may change overtime, an inherent problem of online learning (known as concept drift). In order to handle concept drift regardless of the learning model, we propose new methods to monitor the performance metrics measured during the learning process, to trigger drift signals when a significant variation has been detected. To monitor this performance, we apply some probability inequalities that assume only independent, univariate and bounded random variables to obtain theoretical guarantees for the detection of such distributional changes. Some common restrictions for the online change detection as well as relevant types of change (abrupt and gradual) are considered. Two main approaches are proposed, the first one involves moving averages and is more suitable to detect abrupt changes. The second one follows a widespread intuitive idea to deal with gradual changes using weighted moving averages. The simplicity of the proposed methods, together with the computational efficiency make them very advantageous. We use a Naive Bayes classifier and a Perceptron to evaluate the performance of the methods over synthetic and real data.) <|cite_end|> and compressed sensing of high dimensional sparse functions <|cite_start|> (Reference: Compressed learning of high-dimensional sparse functions: This paper presents a simple randomised algorithm for recovering high-dimensional sparse functions, i.e. functions ƒ : [0, 1]^d → ℝ which depend effectively only on k out of d variables, meaning ƒ(x_1, …, x_d) = g(x_{i_1}, …, x_{i_k}), where the indices 1 ≤ i_1 < i_2 < … < i_k ≤ d are unknown.
It is shown that (under certain conditions on g) this algorithm recovers the k unknown coordinates with probability at least 1 − 6 exp(−L) using only O(k(L+log k)(L+log d)) samples of ƒ.) <|cite_end|>, etc. It has also been employed in biomedical fields, e.g., developing computational molecular modelling tools <|cite_start|> (Reference: Statistical Framework for Uncertainty Quantification in Computational Molecular Modeling: Computational molecular modeling often involves noisy data including uncertainties in model parameters, computational approximations etc., all of which propagates to uncertainties in all computed quantities of interest (QOI). This is a fundamental problem that is often left ignored or treated without sufficient rigor. In this article, we introduce a statistical framework for modeling such uncertainties and providing certificates of accuracy for several QOI. Our framework treats sources of uncertainty as random variables with known distributions, and provides both a theoretical and an empirical technique for propagating those uncertainties to the QOI, also modeled as a random variable. Moreover, the framework also enables one to model uncertainties in a multi-step pipeline, where the outcome of one step cascades into the next. While there are many sources of uncertainty, in this article we have applied our framework to only positional uncertainties of atoms in high resolution models, and in the form of B-factors and their effect in computed molecular properties. The empirical approach requires sufficiently sampling over the joint space of the random variables. We show that using novel pseudo-random number generation techniques, it is possible to achieve the required coverage using very few samples. We have also developed intuitive visualization models to analyze uncertainties at different stages of molecular modeling. We strongly believe this framework would be immensely valuable in evaluating predicted computational models, and provide statistical guarantees on their accuracy.) <|cite_end|> and analyzing level set estimation in medical imaging and pattern recognition <|cite_start|> (Reference: Minimax optimal level-set estimation: Tree-structured partitions provide a natural framework for rapid and accurate extraction of level sets of a multivariate function f from noisy data. In general, a level set S is the set on which f exceeds some critical value (e.g. S = {x : f(x) ≥ γ}). Boundaries of such sets typically constitute manifolds embedded in the high-dimensional observation space. The identification of these boundaries is an important theoretical problem with applications for digital elevation maps, medical imaging, and pattern recognition. Because set identification is intrinsically simpler than function denoising or estimation, explicit set extraction methods can achieve higher accuracy than more indirect approaches (such as extracting a set of interest from an estimate of the function). The trees underlying our method are constructed by minimizing a complexity regularized data-fitting term over a family of dyadic partitions. Using this framework, problems such as simultaneous estimation of multiple (non-intersecting) level lines of a function can be readily solved from both a theoretical and practical perspective. Our method automatically adapts to spatially varying regularity of both the boundary of the level set and the function underlying the data.
Level set extraction using multiresolution trees can be implemented in near linear time and specifically aims to minimize an error metric sensitive to both the error in the location of the level set and the distance of the function from the critical level. Translation invariant "voting-over-shifts" set estimates can also be computed rapidly using an algorithm based on the undecimated wavelet transform.) <|cite_end|>, etc. Due to its wide range of applications, refined results and improvements of Hoeffding's inequality and the Hoeffding-Azuma inequality for martingales usually yield new insights into the development of related fields. Recently, Hertz <|cite_start|> (Reference: Improved Hoeffding's Lemma and Hoeffding's Tail Bounds: The purpose of this article is to improve Hoeffding's lemma and consequently Hoeffding's tail bounds. The improvement pertains to left skewed zero mean random variables X\in[a,b], where a<0 and -a>b. The proof of Hoeffding's improved lemma uses Taylor's expansion, the convexity of \exp(sx), s\in \RR, and an unnoticed observation since Hoeffding's publication in 1963 that for -a>b the maximum of the intermediate function \tau(1-\tau) appearing in Hoeffding's proof is attained at an endpoint rather than at \tau=0.5 as in the case b>-a. Using Hoeffding's improved lemma we obtain one sided and two sided tail bounds for \PP(S_n\ge t) and \PP(|S_n|\ge t), respectively, where S_n=\sum_{i=1}^nX_i and the X_i\in[a_i,b_i],i=1,...,n are independent zero mean random variables (not necessarily identically distributed). It is interesting to note that we could also improve Hoeffding's two sided bound for all \{X_i: -a_i\ne b_i,i=1,...,n\}. This is so because here the one sided bound should be increased by \PP(-S_n\ge t), wherein the left skewed intervals become right skewed and vice versa.) <|cite_end|> presented an improvement of the original Hoeffding's inequality that utilizes the asymmetry of the finite interval on which the random variable is distributed. It reduces the related exponential coefficient from the arithmetic mean to the geometric mean of $|a|$ and $b$, where $[a,b]$ ($a<0$, $b>0$) is the interval on which the random variable $X$ is distributed. This improvement motivates us to further improve Hoeffding's inequality. For simplicity, let us first review the result of Hoeffding's inequality <|cite_start|> (Reference: Probability Inequalities for Sums of Bounded Random Variables: ) <|cite_end|> and its improvement obtained by Hertz <|cite_start|> (Reference: Improved Hoeffding's Lemma and Hoeffding's Tail Bounds: The purpose of this article is to improve Hoeffding's lemma and consequently Hoeffding's tail bounds. The improvement pertains to left skewed zero mean random variables X\in[a,b], where a<0 and -a>b. The proof of Hoeffding's improved lemma uses Taylor's expansion, the convexity of \exp(sx), s\in \RR, and an unnoticed observation since Hoeffding's publication in 1963 that for -a>b the maximum of the intermediate function \tau(1-\tau) appearing in Hoeffding's proof is attained at an endpoint rather than at \tau=0.5 as in the case b>-a. Using Hoeffding's improved lemma we obtain one sided and two sided tail bounds for \PP(S_n\ge t) and \PP(|S_n|\ge t), respectively, where S_n=\sum_{i=1}^nX_i and the X_i\in[a_i,b_i],i=1,...,n are independent zero mean random variables (not necessarily identically distributed). It is interesting to note that we could also improve Hoeffding's two sided bound for all \{X_i: -a_i\ne b_i,i=1,...,n\}.
This is so because here the one sided bound should be increased by \PP(-S_n\ge t), wherein the left skewed intervals become right skewed and vice versa.) <|cite_end|>. \subsection{Hoeffding's Inequality and An Improvement} Assume that $X$ is a zero mean real valued random variable and $X\in [a, b]$ with $a<0$, $b>0$. Hoeffding's lemma states that for all $s\in \textbf{R}$, $s>0$, \begin{equation}\label{hoeffding inequality} E[e^{sX}]\leq \exp\Big\{\frac{s^2 (b-a)^2}{8}\Big\} \end{equation} Recently, D. Hertz presented an improved result of the following form: \begin{equation} \label{hertz ine} E[e^{sX}]\leq \exp\Big\{\frac{s^2 \Phi^2(a,b)}{2}\Big\}, \end{equation} where \begin{equation} \Phi(a,b)=\begin{cases} \frac{|a|+b}{2}, & b > |a|, \\ \sqrt{|a|b}, & b\leq |a|. \end{cases} \end{equation} Since $\sqrt{|a|b} \leq \frac{|a|+b}{2}$, this gives a tighter upper bound for $-a>b$ compared with the original Hoeffding's inequality. Motivated by this result, an interesting question arises: can we further improve Hoeffding's inequality, and if so, how? In this paper, we derive a new type of Hoeffding's inequalities in which higher order moments of the random variable $X$ are taken into account beyond $E(X)=0$, i.e., $E(X^k)=m_k$ $(k=2,3,\dots)$. \subsection{Main Theorem} To give a clear picture of this paper, the new type of Hoeffding's inequalities is stated as follows. \begin{thm}\label{main theorem} Assume that $X$ is a real valued random variable with $E(X)=0$ and $X\in [a, b]$ with $a<0$, $b>0$. For all $s\in \textbf{R}$, $s>0$, and any integer $k \geq 1$, we have \begin{equation}\label{hoeffding k} E[e^{sX}]\leq \Upsilon_k(a,b) \exp\Big\{\frac{s^2}{2k}\Phi^2(a,b)\Big\} \end{equation} where \begin{equation} \Upsilon_k(a,b)=\Big[1+\frac{\max\{|a|,b\}}{|a|}\Big]^k-k\frac{\max\{|a|,b\}}{|a|} \end{equation} \begin{equation} \Phi(a,b)=\begin{cases} \frac{|a|+b}{2}, & b > |a|, \\ \sqrt{|a|b}, & b\leq |a|. \end{cases} \end{equation} \end{thm} \textbf{Remark 1.} When $k=1$, it is easy to check that $\Upsilon_1(a,b)=1$. This indicates that the new-type Hoeffding's inequality reduces to the improved Hoeffding's inequality (\ref{hertz ine}), which is still better than the original Hoeffding's inequality. When $k=2$, $\Upsilon_2(a,b)=1+\Big(\frac{\max\{|a|,b\}}{|a|}\Big)^2$ and the exponential coefficient is halved compared to the improved Hoeffding's inequality (\ref{hertz ine}). In fact, this result can be refined, as the following corollary shows. \begin{cor}\label{cor k equals 2} Under the same assumptions as Theorem \ref{main theorem}, for $k=2$, we have \begin{equation} E[e^{sX}]\leq \Big[1+\frac{m_2}{a^2}\Big] \exp\Big\{\frac{s^2}{4}\Phi^2(a,b)\Big\} \end{equation} where $m_2=E(X^2)$. If $E(X^2)$ is unknown, the inequality can be relaxed to \begin{equation} E[e^{sX}]\leq \Big[1+\frac{b}{|a|}\Big] \exp\Big\{\frac{s^2}{4}\Phi^2(a,b)\Big\} \end{equation} and \begin{equation} \label{ine2} E[e^{sX}]\leq 2 \exp\Big\{\frac{s^2}{4}\Phi^2(a,b)\Big\} \quad \text{if } |a| \geq b \end{equation} \end{cor} Comparing the result in eqn. (\ref{ine2}) with that presented in Theorem \ref{main theorem}, it is easy to check that \begin{equation} \Big[1+\frac{b}{|a|}\Big]\leq 1+\Big(\frac{\max\{|a|,b\}}{|a|}\Big)^2 \end{equation} holds. This indicates that Corollary \ref{cor k equals 2} indeed improves the result presented in Theorem \ref{main theorem} for $k=2$. Compared to eqn. (\ref{hertz ine}), the exponential coefficient has been reduced by a factor of 2.
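To illustrate the comparison numerically, the following short Python check evaluates the three moment generating function bounds side by side; the interval $[a,b]=[-2,1]$ and the values of $s$ are illustrative choices only, not taken from any result above.

\begin{verbatim}
import math

def phi(a, b):
    # Hertz's Phi(a, b): arithmetic mean of |a| and b when b > |a|,
    # geometric mean otherwise.
    return (abs(a) + b) / 2 if b > abs(a) else math.sqrt(abs(a) * b)

def hoeffding_bound(s, a, b):
    # Original Hoeffding bound: exp(s^2 (b - a)^2 / 8).
    return math.exp(s**2 * (b - a)**2 / 8)

def hertz_bound(s, a, b):
    # Hertz's improved bound: exp(s^2 Phi^2 / 2).
    return math.exp(s**2 * phi(a, b)**2 / 2)

def new_type_bound(s, a, b, k):
    # New-type bound: Upsilon_k(a, b) * exp(s^2 Phi^2 / (2 k)).
    m = max(abs(a), b) / abs(a)
    upsilon = (1 + m)**k - k * m
    return upsilon * math.exp(s**2 * phi(a, b)**2 / (2 * k))

a, b = -2.0, 1.0  # left-skewed interval, |a| > b
for s in (1.0, 2.0, 4.0):
    print(f"s={s}: Hoeffding={hoeffding_bound(s, a, b):.4g}, "
          f"Hertz={hertz_bound(s, a, b):.4g}, "
          f"new (k=2)={new_type_bound(s, a, b, 2):.4g}")
\end{verbatim}

For $s=1$ the new-type bound with $k=2$ is slightly looser than Hertz's bound ($2e^{0.5}\approx 3.30$ versus $e\approx 2.72$), whereas for $s=2$ it is already considerably tighter ($2e^{2}\approx 14.8$ versus $e^{4}\approx 54.6$), and the gap widens as $s$ grows.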
That is to say, when the parameter $s$ is relatively large, the new type of Hoeffding's inequalities gives much tighter results than the original Hoeffding's inequality and its improvement obtained by Hertz. The remainder of this paper is organized as follows. In Section 2, we first present the proof of Corollary \ref{cor k equals 2}, which illustrates the insight gained by taking higher order moments of real valued random variables into account, and then present the proof of the main theorem of this paper. In Section 3, we present applications of the new-type Hoeffding's inequalities to one-sided and two-sided tail bounds. In Section 4, we discuss how to select the integer parameter $k$ to obtain a tighter bound. Finally, in Section 5, we give the conclusion. <|paper_end|>
[ "<|reference_start|> Concentration of Measure Inequalities in Information Theory, Communications, and Coding: Concentration inequalities have been the subject of exciting developments during the last two decades, and have been intensively studied and used as a powerful tool in various areas. These include convex geometry, functional analysis, statistical physics, mathematical statistics, pure and applied probability theory, information theory, theoretical computer science, learning theory, and dynamical systems. Concentration of Measure Inequalities in Information Theory, Communications, and Coding focuses on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding. In addition to being a survey, this monograph also includes various new recent results derived by the authors. This third edition of the bestselling book introduces the reader to the martingale method and the Efron-Stein-Steele inequalities in completely new sections. A new application of lossless source coding with side information is described in detail. Finally, the references have been updated and ones included that have been published since the original publication. Concentration of Measure Inequalities in Information Theory, Communications, and Coding is essential reading for all researchers and scientists in information theory and coding. <|reference_end|>", "<|reference_start|> PAC-Bayesian Inequalities for Martingales: We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability theory and statistics, where martingales are encountered. We also present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the [0,1] interval by the expectation of the same function of independent Bernoulli variables. This inequality is applied to derive a tighter analog of Hoeffding-Azuma's inequality. <|reference_end|>", "<|reference_start|> Hoeffding's lemma for Markov Chains and its applications to statistical learning: We extend Hoeffding's lemma to general-state-space and not necessarily reversible Markov chains. Let $\\{X_i\\}_{i \\ge 1}$ be a stationary Markov chain with invariant measure $\\pi$ and absolute spectral gap $1-\\lambda$, where $\\lambda$ is defined as the operator norm of the transition kernel acting on mean zero and square-integrable functions with respect to $\\pi$. Then, for any bounded functions $f_i: x \\mapsto [a_i,b_i]$, the sum of $f_i(X_i)$ is sub-Gaussian with variance proxy $\\frac{1+\\lambda}{1-\\lambda} \\cdot \\sum_i \\frac{(b_i-a_i)^2}{4}$. This result differs from the classical Hoeffding's lemma by a multiplicative coefficient of $(1+\\lambda)/(1-\\lambda)$, and simplifies to the latter when $\\lambda = 0$. The counterpart of Hoeffding's inequality for Markov chains immediately follows. Our results assume none of countable state space, reversibility and time-homogeneity of Markov chains and cover time-dependent functions with various ranges. 
We illustrate the utility of these results by applying them to six problems in statistics and machine learning. <|reference_end|>", "<|reference_start|> Online and Non-Parametric Drift Detection Methods Based on Hoeffding's Bounds: Incremental and online learning algorithms are more relevant in the data mining context because of the increasing necessity to process data streams. In this context, the target function may change overtime, an inherent problem of online learning (known as concept drift). In order to handle concept drift regardless of the learning model, we propose new methods to monitor the performance metrics measured during the learning process, to trigger drift signals when a significant variation has been detected. To monitor this performance, we apply some probability inequalities that assume only independent, univariate and bounded random variables to obtain theoretical guarantees for the detection of such distributional changes. Some common restrictions for the online change detection as well as relevant types of change (abrupt and gradual) are considered. Two main approaches are proposed, the first one involves moving averages and is more suitable to detect abrupt changes. The second one follows a widespread intuitive idea to deal with gradual changes using weighted moving averages. The simplicity of the proposed methods, together with the computational efficiency make them very advantageous. We use a Naive Bayes classifier and a Perceptron to evaluate the performance of the methods over synthetic and real data. <|reference_end|>" ]
[ 7, 8, 9, 11 ]
{"<|cite_1|>": "ss-876676", "<|cite_2|>": "ss-1002580", "<|cite_3|>": "ss-2217262", "<|cite_4|>": "arxiv-43534", "<|cite_5|>": "ss-2217263", "<|cite_6|>": "ss-2217264", "<|cite_7|>": "ss-2217265", "<|cite_8|>": "ss-1203868", "<|cite_9|>": "arxiv-25820", "<|cite_10|>": "ss-1308226", "<|cite_11|>": "arxiv-128143", "<|cite_12|>": "ss-901986", "<|cite_13|>": "ss-1012192", "<|cite_14|>": "ss-1881683", "<|cite_15|>": "ss-1528648", "<|cite_16|>": "ss-2217266", "<|cite_17|>": "ss-876676", "<|cite_18|>": "ss-2217266"}
2007.07218-0
<|paper_start|> Title: Learning Accurate and Human-Like Driving using Semantic Maps and Attention Abstract: Learning Accurate and Human-Like Driving using Semantic Maps and Attention: This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like. To tackle the first issue we exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with such maps. The maps are used in an attention mechanism that promotes segmentation confidence masks, thus focusing the network on semantic classes in the image that are important for the current driving situation. Human-like driving is achieved using adversarial learning, by not only minimizing the imitation loss with respect to the human driver but by further defining a discriminator that forces the driving model to produce action sequences that are human-like. Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving models are more accurate and behave more human-like than previous methods. Introduction \label{sec:intro} Over the last few decades autonomous driving has seen dramatic advances, from the humble beginnings <|cite_start|> (Reference: AUTONOMOUS HIGH SPEED ROAD VEHICLE GUIDANCE BY COMPUTER VISION: ) <|cite_end|>, over the DARPA challenges <|cite_start|> (Reference: The 2005 DARPA Grand Challenge: The Great Robot Race: The DARPA Grand Challenge was a landmark in the field of robotics: a race by autonomous vehicles through 132 miles of rough, cross-country Nevada terrain that showcased exciting and unprecedented capabilities in robotic perception, navigation, and control. The event took place in October 2005, and drew teams of competitors from academia and industry, and many garage hobbyists. This book presents fifteen technical papers that are written at a level that makes them easily accessible to a broad technical audience, describing the technology behind most of the robotic vehicles that participated in this famous race. The papers describe each team's driverless vehicle, race strategy, and insights. As a whole, they present the state of the art in autonomous vehicle technology, and offer a glimpse of future technology for tomorrow's driverless cars. This book will serve as an authoritative, archival source for the DARPA Grand Challenge and a must have for robotics students and researchers, since it describes the state of the art in perception, planning and control.) <|cite_end|> <|cite_start|> (Reference: The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, George Air Force Base, Victorville, California, USA: ) <|cite_end|>, to today's autonomous driving companies, which have driven tens of millions of miles autonomously on public roads. These massive gains were achieved by improving all the components of an autonomous car over the years. Advances were not limited to the hardware, but also extended to the algorithms necessary to drive a car. Normally these algorithms are large software stacks that are built using multiple layers, such as perception, localization, motion planning, and control, see <|cite_start|> (Reference: The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, George Air Force Base, Victorville, California, USA: ) <|cite_end|> <|cite_start|> (Reference: Autonomous driving in urban environments: Boss and the Urban Challenge: ) <|cite_end|>.
However, due to the complexity of such stacked systems, in recent years we have seen a rise of end-to-end driving models that address the full driving task. These driving models offer an elegant alternative, directly mapping sensor inputs to driving actions <|cite_start|> (Reference: End to End Learning for Self-Driving Cars: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).) <|cite_end|> <|cite_start|> (Reference: End-to-end Driving via Conditional Imitation Learning: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM) <|cite_end|> <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner.
This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.) <|cite_end|>. Most works on end-to-end driving models use simplistic sensor setups compared to traditional autonomous driving stacks <|cite_start|> (Reference: The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, George Air Force Base, Victorville, California, USA: ) <|cite_end|>. However, recent work showed that rendered maps can improve the performance of end-to-end driving models <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.)
<|cite_end|> <|cite_start|> (Reference: Variational End-to-End Navigation and Localization: Deep learning has revolutionized the ability to learn "end-to-end" autonomous vehicle control directly from raw sensory data. While there have been recent extensions to handle forms of navigation instruction, these works are unable to capture the full distribution of possible actions that could be taken and to reason about localization of the robot within the environment. In this paper, we extend end-to-end driving networks with the ability to perform point-to-point navigation as well as probabilistic localization using only noisy GPS data. We define a novel variational network capable of learning from raw camera data of the environment as well as higher level roadmaps to predict (1) a full probability distribution over the possible control commands; and (2) a deterministic control command capable of navigating on the route specified within the map. Additionally, we formulate how our model can be used to localize the robot according to correspondences between the map and the observed visual road topology, inspired by the rough localization that human drivers can perform. We test our algorithms on real-world driving data that the vehicle has never driven through before, and integrate our point-to-point navigation algorithms onboard a full-scale autonomous vehicle for real-time performance. Our localization algorithm is also evaluated over a new set of roads and intersections to demonstrate rough pose localization even in situations without any GPS prior.) <|cite_end|>, and that, if HD maps are available, they can even be used as a fundamental part of the end-to-end driving model <|cite_start|> (Reference: ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst: Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.) <|cite_end|> <|cite_start|> (Reference: End-to-end Interpretable Neural Motion Planner: In this paper, we propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road-users.
Towards this goal, we design a holistic model that takes as input raw LIDAR data and a HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. We then sample a set of diverse physically possible trajectories and choose the one with the minimum learned cost. Importantly, our cost volume is able to naturally capture multi-modality. We demonstrate the effectiveness of our approach in real-world driving data captured in several cities in North America. Our experiments show that the learned cost volume can generate safer planning than all the baselines.) <|cite_end|>. End-to-end driving models can be deployed to either maneuver an autonomous car, act as a sanity checker of a traditional stack in a tandem approach, or evaluate human driving in mobility-as-a-service applications (such as detecting taxi driver fatigue). But as such, they not only need to be able to drive \emph{accurately}, but should also drive \emph{human-like}, as this is believed to increase the acceptance of autonomous cars <|cite_start|> (Reference: A Framework for Modeling Human-like Driving Behaviors for Autonomous Vehicles in Driving Simulators: A framework for modeling driver behavior within driving simulators is described in this paper. This framework serves as a basis for building human-like driving behavior models for autonomous vehicles operating within the virtual environment of a driving simulator. The framework consists of four units, the Perception Unit, the Emotions Unit, the Decision-making Unit (DMU), and the Decision-implementation Unit (DIU). The Perception Unit defines how the model perceives its environment in local and global terms. The Emotions Unit defines how the model responds emotionally to its environment. The DMU investigates the environment for possible actions that might potentially serve the model's emotional demands. And finally the DIU tries to implement these decisions when a traffic condition, perceived as safe enough for such an implementation, emerges. Each of these units has its own set of fuzzy variables and fuzzy if-then rules. Any driving model, that is based on this framework, should provide membership function parameters for these fuzzy variables in accordance with the category of human driving behavior this model is targeting. Our framework addresses decision making and implementation at the maneuvering and operational levels of the driving task. Decisions at the planning level are addressed through a script-based traffic controller. The present model is limited to simulating human behaviors when driving in a two-lane rural environment.) <|cite_end|> <|cite_start|> (Reference: Toward More Realistic Driving Behavior Models for Autonomous Vehicles in Driving Simulators: Autonomous vehicles are perhaps the most encountered element in a driving simulator. Their effect on the realism of the simulator is critical. For autonomous vehicles to contribute positively to the realism of the hosting driving simulator, they need to have a realistic appearance and, possibly more importantly, realistic behavior. Addressed is the problem of modeling realistic and humanlike behaviors on simulated highway systems by developing an abstract framework that captures the details of human driving at the microscopic level.
This framework consists of four units that together define and specify the elements needed for a concrete humanlike driving model to be implemented within a driving simulator. These units are the perception unit, the emotions unit, the decision-making unit, and the decision-implementation unit. Realistic models of humanlike driving behavior can be built by implementing the specifications set by the driving framework. Four humanlike driving models have been implemented on the basis of the driving framework: (a) a generic normal driving model, (b) an aggressive driving model, (c) an alcoholic driving model, and (d) an elderly driving model. These driving models provide experiment designers with a powerful tool for generating complex traffic scenarios in their experiments. These behavioral models were incorporated along with three-dimensional visual models and vehicle dynamics models into one entity, which is the autonomous vehicle. Subjects perceived the autonomous vehicles with the described behavioral models as having a positive effect on the realism of the driving simulator. The erratic driving models were identified correctly by the subjects in most cases.) <|cite_end|> <|cite_start|> (Reference: Human-like motion planning model for driving in signalized intersections: ) <|cite_end|> and improve human driver evaluation capability. In this work, we tackle both accurate driving, using high fidelity semantic maps, and human-like driving. This also directly defines our three main contributions: First, the Drive360 dataset introduced in <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.) <|cite_end|> is extended with high-precision semantic maps from HERE Technologies. To the best of our knowledge, this is the first large-scale dataset suited for training end-to-end driving models that includes high-precision semantic maps.
Second, we propose a novel way to include these semantic maps in the end-to-end driving model using an attention mechanism that can promote different confidence masks of a semantic segmentation network, allowing the network to combine the map information with the semantic information in the image. Third, to achieve human-like driving we propose to use adversarial learning to teach the car about human driving styles. Specifically, a discriminator is trained, together with our driving model, to distinguish between human driving and our ``machine'' driving. This allows us to train for accurate and human-like driving at the same time. A preliminary version of this work has been released on arXiv before with substantial differences <|cite_start|> (Reference: Learning Accurate, Comfortable and Human-like Driving: Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page.) <|cite_end|>.
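To make these two contributions concrete, the following PyTorch-style sketch shows one plausible realization of the map-conditioned attention over segmentation confidence masks and of the combined imitation-plus-adversarial objective. All module names, tensor shapes, and the exact weighting scheme are illustrative assumptions, not the implementation used in this work.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MapAttention(nn.Module):
    """Re-weights per-class segmentation confidence masks with
    attention scores computed from semantic map features (sketch)."""
    def __init__(self, map_dim, num_classes):
        super().__init__()
        self.score = nn.Linear(map_dim, num_classes)

    def forward(self, masks, map_feat):
        # masks: (B, C, H, W) confidence masks for C semantic classes
        # map_feat: (B, map_dim) features from the semantic map
        attn = torch.softmax(self.score(map_feat), dim=1)   # (B, C)
        # Promote the classes relevant to the current driving situation.
        return masks * attn[:, :, None, None]

def driving_loss(pred_actions, human_actions, discriminator,
                 lambda_adv=0.1):
    """Imitation loss plus an adversarial term rewarding action
    sequences the discriminator judges to be human-like."""
    imitation = F.l1_loss(pred_actions, human_actions)
    d_out = discriminator(pred_actions)  # probability "human", (B, 1)
    adversarial = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    return imitation + lambda_adv * adversarial

# The discriminator itself is updated in alternation, labeling human
# action sequences as 1 and model-generated sequences as 0, as in a
# standard GAN setup.
\end{verbatim}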
It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.) <|cite_end|> <|cite_start|> (Reference: End to End Learning for Self-Driving Cars: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).) <|cite_end|> <|cite_start|> (Reference: LiDAR-Video Driving Dataset: Learning Driving Policies Effectively: Learning autonomous-driving policies is one of the most challenging but promising tasks for computer vision. Most researchers believe that future research and applications should combine cameras, video recorders and laser scanners to obtain comprehensive semantic understanding of real traffic. However, current approaches only learn from large-scale videos, due to the lack of benchmarks that consist of precise laser-scanner data. In this paper, we are the first to propose a LiDAR-Video dataset, which provides large-scale high-quality point clouds scanned by a Velodyne laser, videos recorded by a dashboard camera and standard drivers' behaviors. Extensive experiments demonstrate that extra depth information help networks to determine driving policies indeed.) <|cite_end|> <|cite_start|> (Reference: Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. 
This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.) <|cite_end|> <|cite_start|> (Reference: End-to-end Driving via Conditional Imitation Learning: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM) <|cite_end|> <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. 
Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.) <|cite_end|> <|cite_start|> (Reference: Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks: The training of many existing end-to-end steering angle prediction models heavily relies on steering angles as the supervisory signal. Without learning from much richer contexts, these methods are susceptible to the presence of sharp road curves, challenging traffic conditions, strong shadows, and severe lighting changes. In this paper, we considerably improve the accuracy and robustness of predictions through heterogeneous auxiliary networks feature mimicking, a new and effective training method that provides us with much richer contextual signals apart from steering direction. Specifically, we train our steering angle predictive model by distilling multi-layer knowledge from multiple heterogeneous auxiliary networks that perform related but different tasks, e.g., image segmentation or optical flow estimation. As opposed to multi-task learning, our method does not require expensive annotations of related tasks on the target set. This is made possible by applying contemporary off-the-shelf networks on the target set and mimicking their features in different layers after transformation. The auxiliary networks are discarded after training without affecting the runtime efficiency of our model. Our approach achieves a new state-of-the-art on Udacity and Comma.ai, outperforming the previous best by a large margin of 12.8% and 52.1%, respectively. Encouraging results are also shown on Berkeley Deep Drive (BDD) dataset.) <|cite_end|>. In <|cite_start|> (Reference: End to End Learning for Self-Driving Cars: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).) 
<|cite_end|>, the authors trained an end-to-end method with a collection of front-facing videos. The idea was later extended by using a larger video dataset <|cite_start|> (Reference: End-to-end Learning of Driving Models from Large-scale Video Datasets: Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or a simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm.) <|cite_end|>, by adding side tasks to regularize the training <|cite_start|> (Reference: End-to-end Learning of Driving Models from Large-scale Video Datasets: Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or a simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm.) <|cite_end|> <|cite_start|> (Reference: Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks: The training of many existing end-to-end steering angle prediction models heavily relies on steering angles as the supervisory signal. Without learning from much richer contexts, these methods are susceptible to the presence of sharp road curves, challenging traffic conditions, strong shadows, and severe lighting changes. In this paper, we considerably improve the accuracy and robustness of predictions through heterogeneous auxiliary networks feature mimicking, a new and effective training method that provides us with much richer contextual signals apart from steering direction. Specifically, we train our steering angle predictive model by distilling multi-layer knowledge from multiple heterogeneous auxiliary networks that perform related but different tasks, e.g., image segmentation or optical flow estimation. As opposed to multi-task learning, our method does not require expensive annotations of related tasks on the target set. This is made possible by applying contemporary off-the-shelf networks on the target set and mimicking their features in different layers after transformation. The auxiliary networks are discarded after training without affecting the runtime efficiency of our model. Our approach achieves a new state-of-the-art on Udacity and Comma.ai, outperforming the previous best by a large margin of 12.8% and 52.1%, respectively.
Encouraging results are also shown on Berkeley Deep Drive (BDD) dataset.) <|cite_end|>, by introducing directional commands <|cite_start|> (Reference: End-to-end Driving via Conditional Imitation Learning: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM) <|cite_end|> and route planners <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner.
This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.) <|cite_end|>, by adding synthesized off-the-road scenarios <|cite_start|> (Reference: ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst: Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.) <|cite_end|>, and by adding modules to predict when the model fails <|cite_start|> (Reference: Failure Prediction for Autonomous Driving: The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e. to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. 
A camera-based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human `ground-truth' maneuvers were then recorded, to yield the `failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.) <|cite_end|>. The main contributions of this work, namely using semantic map data, either directly or through an attention mechanism, and rendering human-like driving in an end-to-end learning framework, are complementary to all of the previously developed methods. There are also methods dedicated to robust transfer of driving policies from a synthetic domain to the real-world domain <|cite_start|> (Reference: Driving Policy Transfer via Modularity and Abstraction: End-to-end approaches to autonomous driving have high sample complexity and are difficult to scale to realistic urban driving. Simulation can help end-to-end driving systems by providing a cheap, safe, and diverse training environment. Yet training driving policies in simulation brings up the problem of transferring such policies to the real world. We present an approach to transferring driving policies from simulation to reality via modularity and abstraction. Our approach is inspired by classic driving systems and aims to combine the benefits of modular architectures and end-to-end deep learning approaches. The key idea is to encapsulate the driving policy such that it is not directly exposed to raw perceptual input or low-level vehicle dynamics. We evaluate the presented approach in simulated urban environments and in the real world. In particular, we transfer a driving policy trained in simulation to a 1/5-scale robotic truck that is deployed in a variety of conditions, with no finetuning, on two continents. The supplementary video can be viewed at https://youtu.be/BrMDJqI6H5U) <|cite_end|> <|cite_start|> (Reference: Learning to Drive from Simulation without Real World Labels: Simulation can be a powerful tool for understanding machine learning systems and designing methods to solve real-world problems. Training and evaluating methods purely in simulation is often "doomed to succeed" at the desired task in a simulated environment, but the resulting models are incapable of operation in the real world. Here we present and evaluate a method for transferring a vision-based lane following driving policy from simulation to operation on a rural road without any real-world labels. Our approach leverages recent advances in image-to-image translation to achieve domain transfer while jointly learning a single-camera control policy from simulation control labels. We assess the driving performance of this method using both open-loop regression metrics, and closed-loop performance operating an autonomous vehicle on rural and urban roads.) <|cite_end|>. Some other works study how to better evaluate the learned driving models <|cite_start|> (Reference: Challenges in Autonomous Vehicle Testing and Validation: Software testing is all too often simply a bug hunt rather than a wellconsidered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale.
The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and failoperational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the type of algorithms that provide high-level autonomy themselves, it seems within reach to instead architect the system and its accompanying design process to be able to employ existing software safety approaches.) <|cite_end|> <|cite_start|> (Reference: On Offline Evaluation of Vision-based Driving Models: Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics. The supplementary video can be viewed at https://www.youtube.com/watch?v=P8K8Z-iF0cY) <|cite_end|>. These works are complementary to ours. Other contributions have chosen the middle ground between traditional pipelined methods and the monolithic end-to-end approach. They learn driving models from compact intermediate representations called affordance indicators, such as \emph{distance to the front car} and \emph{existence of a traffic light} <|cite_start|> (Reference: DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction.
To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.) <|cite_end|> <|cite_start|> (Reference: Conditional Affordance Learning for Driving in Urban Environments: Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68 % in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.) <|cite_end|>. Our engineered features from semantic maps can be regarded as a form of affordance indicators. Recently, reinforcement learning for driving <|cite_start|> (Reference: Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving: Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways. Since there are many possible scenarios, manually tackling all possible cases will likely yield a too simplistic policy. Moreover, one must balance between unexpected behavior of other drivers/pedestrians and at the same time not to be too defensive so that normal traffic flow is maintained. In this paper we apply deep reinforcement learning to the problem of forming long term driving strategies. We note that there are two major challenges that make autonomous driving different from other robotic tasks. First, is the necessity for ensuring functional safety - something that machine learning has difficulty with given that performance is optimized at the level of an expectation over many instances. Second, the Markov Decision Process model often used in robotics is problematic in our case because of unpredictable behavior of other agents in this multi-agent scenario. We make three contributions in our work. First, we show how policy gradient iterations can be used without Markovian assumptions. Second, we decompose the problem into a composition of a Policy for Desires (which is to be learned) and trajectory planning with hard constraints (which is not learned).
The goal of Desires is to enable comfort of driving, while hard constraints guarantees the safety of driving. Third, we introduce a hierarchical temporal abstraction we call an "Option Graph" with a gating mechanism that significantly reduces the effective horizon and thereby reducing the variance of the gradient estimation even further.) <|cite_end|> <|cite_start|> (Reference: Deep Reinforcement Learning framework for Autonomous Driving: Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles.) <|cite_end|> <|cite_start|> (Reference: Learning to Drive in a Day: We demonstrate the first application of deep reinforcement learning to autonomous driving. From randomly initialised parameters, our model is able to learn a policy for lane following in a handful of training episodes using a single monocular image as input. We provide a general and easy to obtain reward: the distance travelled by the vehicle without the safety driver taking control. We use a continuous, model-free deep reinforcement learning algorithm, with all exploration and optimisation performed on-vehicle. This demonstrates a new framework for autonomous driving which moves away from reliance on defined logical rules, mapping, and direct supervision. We discuss the challenges and opportunities to scale this approach to a broader range of autonomous driving tasks.) <|cite_end|>and learning to drive in simulators <|cite_start|> (Reference: Exploring the Limitations of Behavior Cloning for Autonomous Driving: Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. 
We show that behavior cloning leads to state-of-the-art results, including in unseen environments, executing complex lateral and longitudinal maneuvers without these reactions being explicitly programmed. However, we confirm well-known limitations (due to dataset bias and overfitting), new generalization issues (due to dynamic objects and the lack of a causal model), and training instability requiring further research before behavior cloning can graduate to real-world driving. The code of the studied behavior cloning approaches can be found at https://github.com/felipecode/coiltraine .) <|cite_end|> <|cite_start|> (Reference: Learning Situational Driving: Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.) <|cite_end|>have both received increased attention. \textbf{Navigation Maps}. Increasing the accuracy and robustness of self-localization on a map <|cite_start|> (Reference: IEEE/ION Position, Location and Navigation Symposium: — In urban areas, GNSS localization quality is often degraded due to signal blockage and multi-path reflections. When several GNSS signals are blocked by buildings, the remaining unblocked GNSS satellites are typically in a poor geometry for localization (nearly collinear along the street direction). Multi-path reflections result in pseudo range mea ­ surements that can be significantly longer than the line of sight path (true range) resulting in biased geolocation esti ­ mates. If a 3D map of the environment is available, one can address these problems by evaluating the likelihood of GNSS signal strength and location measurements given the map. We present two approaches based on this observation. The first is appropriate for cases when network connectivity may be unavailable or undesired and uses a particle filter framework that simultaneously improve both localization and the 3D map. This approach is shown via experiments to improve the map of a section of a university campus while simultaneously improving receiver localization. 
The second approach which may be more suitable for smartphone applications assumes that network connectivity is available and thus a software service running in the cloud performs the mapping and localization calculations. Early experiments demonstrate the potential of this approach to significantly improve geo-localization accuracy in urban areas.) <|cite_end|> <|cite_start|> (Reference: {IV: It is well known that oral pathology is an essential bridge between basic and clinical science in dental field. Although oral pathology has been introduced to Korean dental science since 1945, there is not yet presented about oral pathologic history. The purpose of this study are to summarize and to introduce Korean oral pathologic history in serial form for Korean oral pathologists.) <|cite_end|> and computing the fastest, most fuel-efficient trajectory from one point to another through a road network <|cite_start|> (Reference: Driving with knowledge from the physical world: This paper presents a Cloud-based system computing customized and practically fast driving routes for an end user using (historical and real-time) traffic conditions and driver behavior. In this system, GPS-equipped taxicabs are employed as mobile sensors constantly probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. Meanwhile, a Cloud aggregates and mines the information from these taxis and other sources from the Internet, like Web maps and weather forecast. The Cloud builds a model incorporating day of the week, time of day, weather conditions, and individual driving strategies (both of the taxi drivers and of the end user for whom the route is being computed). Using this model, our system predicts the traffic conditions of a future time (when the computed route is actually driven) and performs a self-adaptive driving direction service for a particular user. This service gradually learns a user's driving behavior from the user's GPS logs and customizes the fastest route for the user with the help of the Cloud. We evaluate our service using a real-world dataset generated by over 33,000 taxis over a period of 3 months in Beijing. As a result, our service accurately estimates the travel time of a route for a user; hence finding the fastest route customized for the user.) <|cite_end|> <|cite_start|> (Reference: Route Planning in Transportation Networks: We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs.)
<|cite_end|> <|cite_start|> (Reference: GPSView: A Scenic Driving Route Planner: GPS devices have been widely used in automobiles to compute navigation routes to destinations. The generated driving route targets the minimal traveling distance, but neglects the sightseeing experience of the route. In this study, we propose an augmented GPS navigation system, GPSView, to incorporate a scenic factor into the routing. The goal of GPSView is to plan a driving route with scenery and sightseeing qualities, and therefore allow travelers to enjoy sightseeing on the drive. To do so, we first build a database of scenic roadways with vistas of landscapes and sights along the roadside. Specifically, we adapt an attention-based approach to exploit community-contributed GPS-tagged photos on the Internet to discover scenic roadways. The premise is: a multitude of photos taken along a roadway imply that this roadway is probably appealing and catches the public's attention. By analyzing the geospatial distribution of photos, the proposed approach discovers the roadside sight spots, or Points-Of-Interest (POIs), which have good scenic qualities and visibility to travelers on the roadway. Finally, we formulate scenic driving route planning as an optimization task towards the best trade-off between sightseeing experience and traveling distance. Testing in the northern California area shows that the proposed system can deliver promising results.) <|cite_end|>have been popular research fields for many years. By now, navigation systems are widely used to aid human drivers or pedestrians. Yet, their integration for learning driving models has not received a lot of attention in the academic community, mainly due to limited accessibility <|cite_start|> (Reference: End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.) <|cite_end|>. 
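Before turning to our own use of such maps, a minimal sketch may help make the idea concrete: per-frame map attributes can be encoded and fused with a visual embedding in a late-fusion scheme of the kind described below. All feature names, dimensions, and the two-dimensional output in this sketch are illustrative assumptions, not the exact Drive360/HERE schema or our final architecture.
\begin{verbatim}
# Minimal late-fusion sketch (PyTorch); names and sizes are assumptions.
import torch
import torch.nn as nn

class LateFusionDriver(nn.Module):
    """Concatenates a visual embedding with scalar map features."""
    def __init__(self, visual_dim=512, num_map_features=4):
        super().__init__()
        # Hypothetical map features: speed limit, distance to the next
        # intersection, road curvature, number of lanes.
        self.map_encoder = nn.Sequential(
            nn.Linear(num_map_features, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(visual_dim + 32, 128), nn.ReLU(),
            nn.Linear(128, 2))  # predicts (steering angle, speed)

    def forward(self, visual_features, map_features):
        m = self.map_encoder(map_features)
        return self.head(torch.cat([visual_features, m], dim=1))

# Usage with dummy tensors for a batch of 8 frames.
model = LateFusionDriver()
visual = torch.randn(8, 512)   # e.g., from a CNN backbone
maps = torch.randn(8, 4)       # normalized per-frame map features
actions = model(visual, maps)  # shape (8, 2)
\end{verbatim}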
We integrate industry-standard semantic maps from HERE Technologies into the learning of our driving models. We show the advantage of using these maps either in a straightforward late fusion approach or via a map-based attention module. Similar map features have recently been used in an ADAS system for motorcycles. \textbf{Attention}. In recent years, several researchers have proposed different attention mechanisms for end-to-end driving models <|cite_start|> (Reference: Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention: Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving.) <|cite_end|> <|cite_start|> (Reference: icra: 미국 수정헌법 제5조와 제14조는 적법절차를 규정하고 있다. 적법절차는 형사절차뿐만 아니라 사회보장행정영역에서도 필요한 절차로서 미국의 행정절차법은 이를 사회보장법과 관련하여 조화롭게 준수하려 노력하고 있다.   적법절차는 우리들의 전통과 양심 속에 근본적인 것으로 자리...) <|cite_end|> <|cite_start|> (Reference: Grounding Human-to-Vehicle Advice for Self-driving Vehicles: Recent success suggests that deep neural control networks are likely to be a key component of self-driving vehicles. These networks are trained on large datasets to imitate human actions, but they lack semantic understanding of image contents. This makes them brittle and potentially unsafe in situations that do not match training data. Here, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Attention mechanisms tie controller behavior to salient objects in the advice. We evaluate our model on a novel advisable driving dataset with manually annotated human-to-vehicle advice called Honda Research Institute-Advice Dataset (HAD). We show that taking advice improves the performance of the end-to-end network, while the network cues on a variety of visual features that are provided by advice. The dataset is available at https://usa.honda-ri.com/HAD.) <|cite_end|>.
In <|cite_start|> (Reference: Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention: Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving.) <|cite_end|>, a visual attention map is used to visualize the focus of the network. In <|cite_start|> (Reference: icra: 미국 수정헌법 제5조와 제14조는 적법절차를 규정하고 있다. 적법절차는 형사절차뿐만 아니라 사회보장행정영역에서도 필요한 절차로서 미국의 행정절차법은 이를 사회보장법과 관련하여 조화롭게 준수하려 노력하고 있다.   적법절차는 우리들의 전통과 양심 속에 근본적인 것으로 자리...) <|cite_end|>, the attention is more guided and can only promote detected objects. Whereas the former approaches are vision-based, in <|cite_start|> (Reference: Grounding Human-to-Vehicle Advice for Self-driving Vehicles: Recent success suggests that deep neural control networks are likely to be a key component of self-driving vehicles. These networks are trained on large datasets to imitate human actions, but they lack semantic understanding of image contents. This makes them brittle and potentially unsafe in situations that do not match training data. Here, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Attention mechanisms tie controller behavior to salient objects in the advice. We evaluate our model on a novel advisable driving dataset with manually annotated human-to-vehicle advice called Honda Research Institute-Advice Dataset (HAD). We show that taking advice improves the performance of the end-to-end network, while the network cues on a variety of visual features that are provided by advice. The dataset is available at https://usa.honda-ri.com/HAD.) <|cite_end|>, natural-language-based advice to the network is used to focus the network's attention. Our approach differs in the sense that it does not use visual or language-based attention guidance but instead utilizes the rich information present in semantic maps to promote visual object classes based on the driving location.
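To illustrate how such a map-based attention module could look, the sketch below gates the per-class confidence masks of a segmentation network with weights predicted from map features, so that classes relevant to the current location (e.g., traffic lights near an intersection) can be promoted. The class count, gating network, and feature layout are simplifying assumptions for illustration, not the exact module of this paper.
\begin{verbatim}
# Rough sketch of map-conditioned attention over segmentation masks
# (PyTorch); the gating design here is an assumption.
import torch
import torch.nn as nn

class MapAttention(nn.Module):
    """Weights per-class segmentation confidences with map features."""
    def __init__(self, num_map_features=4, num_classes=19):
        super().__init__()
        # Map features -> one attention weight per semantic class.
        self.gate = nn.Sequential(
            nn.Linear(num_map_features, 64), nn.ReLU(),
            nn.Linear(64, num_classes), nn.Sigmoid())

    def forward(self, seg_confidences, map_features):
        # seg_confidences: (B, C, H, W) softmax output of a segmentation net
        w = self.gate(map_features)                   # (B, C)
        return seg_confidences * w[:, :, None, None]  # re-weighted masks

attn = MapAttention()
seg = torch.softmax(torch.randn(8, 19, 64, 128), dim=1)
maps = torch.randn(8, 4)
weighted = attn(seg, maps)  # (8, 19, 64, 128), fused downstream
\end{verbatim}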
\textbf{Human-Like Driving}. A large body of work has studied human driving styles <|cite_start|> (Reference: International Large-Scale Vehicle Corpora for Research on Driver Behavior on the Road: This paper considers a comprehensive and collaborative project to collect large amounts of driving data on the road for use in a wide range of areas of vehicle-related research centered on driving behavior. Unlike previous data collection efforts, the corpora collected here contain both human and vehicle sensor data, together with rich and continuous transcriptions. While most efforts on in-vehicle research are generally focused within individual countries, this effort links a collaborative team from three diverse regions (i.e., Asia, American, and Europe). Details relating to the data collection paradigm, such as sensors, driver information, routes, and transcription protocols, are discussed, and a preliminary analysis of the data across the three data collection sites from the U.S. (Dallas), Japan (Nagoya), and Turkey (Istanbul) is provided. The usability of the corpora has been experimentally verified with a Cohen's kappa coefficient of 0.74 for transcription reliability, as well as being successfully exploited for several in-vehicle applications. Most importantly, the corpora are publicly available for research use and represent one of the first multination efforts to share resources and understand driver characteristics. Future work on distributing the corpora to the wider research community is also discussed.) <|cite_end|> <|cite_start|> (Reference: A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms: In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driver style analysis systems, the application of these systems and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines consist of promising capabilities to address unique driver identification algorithms if model complexity can be reduced.) <|cite_end|>. Statistical approaches have also been employed to evaluate human drivers and to suggest improvements <|cite_start|> (Reference: Driving behavior analysis with smartphones: insights from a controlled field study: We evaluate a mobile application that assesses driving behavior based on in-vehicle acceleration measurements and gives corresponding feedback to drivers. In the insurance business, such applications have recently gained traction as a viable alternative to the monitoring of drivers via "black boxes" installed in vehicles, which lacks interaction opportunities and is perceived as privacy intrusive by policyholders. However, pose uncertainty and other noise-inducing factors make smartphones potentially less reliable as sensor platforms. We therefore compare critical driving events generated by a smartphone with reference measurements from a vehicle-fixed IMU in a controlled field study.
The study was designed to capture driver variability under real-world conditions, while minimizing the influence of external factors. We find that the mobile measurements tend to overestimate critical driving events, possibly due to deviation from the calibrated initial device pose. While weather and daytime do not appear to influence event counts, road type is a significant factor that is not considered in most current state-of-the-art implementations.) <|cite_end|> <|cite_start|> (Reference: 2021 IEEE Global Communications Conference (GLOBECOM): ) <|cite_end|>. Some work has even studied human-like motion planning of autonomous cars, but it was constrained to simulated scenarios <|cite_start|> (Reference: A Framework for Modeling Human-like Driving Behaviors for Autonomous Vehicles in Driving Simulators: A framework for modeling driver behavior within driving simulators is described in this paper. This framework serves as a basis for building human- like driving behavior models for autonomous vehicles operating within the virtual environment of a driving simulator. The framework consists of four units, the Perception Unit, the Emotions Unit, the Decision- making Unit (DMU), and the Decision- implementation Unit (DIU). The Perception Unit defines how the model perceives its environment in local and global terms. The Emotions Unit defines how the model responds emotionally to its environment. The DMU investigates the environment for possible actions that might potentially serve the model's emotional demands. And finally the DIU tries to implement these decisions when a traffic condition, perceived as safe enough for such an implementation, emerges. Each of these units has its own set of fuzzy variables and fuzzy ifthen rules. Any driving model, that is based on this framework, should provide membership function parameters for these fuzzy variables in accordance with the category of human driving behavior this model is targeting. Our framework addresses decision making and implementation at the maneuvering and operational levels of the driving task. Decisions at the planning level are addressed through a script- based traffic controller. The present model is limited to simulating human behaviors when driving in a two- lane rural environment.) <|cite_end|> <|cite_start|> (Reference: Toward More Realistic Driving Behavior Models for Autonomous Vehicles in Driving Simulators: Autonomous vehicles are perhaps the most encountered element in a driving simulator. Their effect on the realism of the simulator is critical. For autonomous vehicles to contribute positively to the realism of the hosting driving simulator, they need to have a realistic appearance and, possibly more importantly, realistic behavior. Addressed is the problem of modeling realistic and humanlike behaviors on simulated highway systems by developing an abstract framework that captures the details of human driving at the microscopic level. This framework consists of four units that together define and specify the elements needed for a concrete humanlike driving model to be implemented within a driving simulator. These units are the perception unit, the emotions unit, the decision-making unit, and the decision-implementation unit. Realistic models of humanlike driving behavior can be built by implementing the specifications set by the driving framework. 
Four humanlike driving models have been implemented on the basis of the driving framework: (a) a generic normal driving model, (b) an aggressive driving model, (c) an alcoholic driving model, and (d) an elderly driving model. These driving models provide experiment designers with a powerful tool for generating complex traffic scenarios in their experiments. These behavioral models were incorporated along with three-dimensional visual models and vehicle dynamics models into one entity, which is the autonomous vehicle. Subjects perceived the autonomous vehicles with the described behavioral models as having a positive effect on the realism of the driving simulator. The erratic driving models were identified correctly by the subjects in most cases.) <|cite_end|>. In <|cite_start|> (Reference: icra: The Fifth and Fourteenth Amendments to the U.S. Constitution provide for due process. Due process is required not only in criminal proceedings but also in the field of social security administration, and U.S. administrative procedure law strives to observe it harmoniously in connection with social security law. Due process is rooted as something fundamental in our traditions and conscience...) <|cite_end|>, a cost function that can generate human-like driving was learned using inverse reinforcement learning. Instead of learning a cost, in our work we rely on adversarial learning to force our driving model to generate action sequences that come from the same distribution as human action sequences. Note that using adversarial learning is not a new concept in imitation learning <|cite_start|> (Reference: Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.) <|cite_end|>. However, using a discriminator to force the policy to learn human-like action sequences is new and compared to <|cite_start|> (Reference: Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.) <|cite_end|>
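To make the adversarial-imitation idea above concrete, here is a minimal GAIL-style sketch: a discriminator is trained to separate human (state, action) pairs from policy-generated ones, and the policy is then rewarded for fooling it, which pushes its action sequences toward the human distribution. All shapes, names, and the linear-logistic discriminator below are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

# Minimal GAIL-style sketch (illustrative shapes and names, not code from
# the cited papers). Each row of X is a flattened (state, action) feature
# vector; the discriminator is a simple linear-logistic classifier.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_step(w, x_human, x_policy, lr=1e-2):
    """One gradient-ascent step on E_human[log D] + E_policy[log(1 - D)]."""
    d_h = sigmoid(x_human @ w)   # D(s, a) on human demonstration pairs
    d_p = sigmoid(x_policy @ w)  # D(s, a) on policy rollout pairs
    grad = (x_human.T @ (1.0 - d_h)) / len(x_human) \
         - (x_policy.T @ d_p) / len(x_policy)
    return w + lr * grad

def imitation_reward(w, x_policy, eps=1e-8):
    """Surrogate reward for the policy optimizer: high when the
    discriminator mistakes policy actions for human ones."""
    return -np.log(1.0 - sigmoid(x_policy @ w) + eps)

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
w = np.zeros(8)
x_human = rng.normal(0.5, 1.0, size=(256, 8))   # stand-in demonstrations
x_policy = rng.normal(0.0, 1.0, size=(256, 8))  # stand-in rollouts
for _ in range(200):
    w = discriminator_step(w, x_human, x_policy)
rewards = imitation_reward(w, x_policy)          # fed back to the policy
```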
[ "<|reference_start|> Human-like motion planning model for driving in signalized intersections: <|reference_end|>", "<|reference_start|> End-to-end Learning of Driving Models from Large-scale Video Datasets: Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or a simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. <|reference_end|>", "<|reference_start|> Failure Prediction for Autonomous Driving: The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e. to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. A camera-based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human `ground-truth' maneuvers were then recorded, to yield the `failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving. <|reference_end|>", "<|reference_start|> Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments. <|reference_end|>" ]
[ 15, 28, 35, 67 ]
{"<|cite_1|>": "ss-691875", "<|multi_cite_2_1|>": "ss-1092450", "<|multi_cite_2_2|>": "ss-866128", "<|multi_cite_3_1|>": "ss-866128", "<|multi_cite_3_2|>": "ss-900092", "<|multi_cite_4_1|>": "arxiv-96666", "<|multi_cite_4_2|>": "arxiv-136630", "<|multi_cite_4_3|>": "arxiv-152996", "<|cite_5|>": "ss-866128", "<|multi_cite_6_1|>": "arxiv-152996", "<|multi_cite_6_2|>": "arxiv-181741", "<|multi_cite_7_1|>": "arxiv-183712", "<|multi_cite_7_2|>": "arxiv-315541", "<|multi_cite_8_1|>": "ss-1683512", "<|multi_cite_8_2|>": "ss-1683513", "<|multi_cite_8_3|>": "ss-2334010", "<|cite_9|>": "arxiv-152996", "<|cite_10|>": "arxiv-196844", "<|cite_11|>": "ss-900092", "<|cite_12|>": "ss-911001", "<|multi_cite_13_1|>": "ss-978466", "<|multi_cite_13_2|>": "arxiv-96666", "<|multi_cite_13_3|>": "ss-1683514", "<|multi_cite_13_4|>": "arxiv-153767", "<|multi_cite_13_5|>": "arxiv-136630", "<|multi_cite_13_6|>": "arxiv-152996", "<|multi_cite_13_7|>": "arxiv-179388", "<|cite_14|>": "arxiv-96666", "<|cite_15|>": "arxiv-111749", "<|multi_cite_16_1|>": "arxiv-111749", "<|multi_cite_16_2|>": "arxiv-179388", "<|cite_17|>": "arxiv-136630", "<|cite_18|>": "arxiv-152996", "<|cite_19|>": "arxiv-152996", "<|cite_20|>": "arxiv-183712", "<|cite_21|>": "arxiv-157360", "<|multi_cite_22_1|>": "arxiv-156267", "<|multi_cite_22_2|>": "arxiv-183929", "<|multi_cite_23_1|>": "ss-1257954", "<|multi_cite_23_2|>": "arxiv-172539", "<|multi_cite_24_1|>": "arxiv-77090", "<|multi_cite_24_2|>": "arxiv-162809", "<|multi_cite_25_1|>": "arxiv-107649", "<|multi_cite_25_2|>": "arxiv-121204", "<|multi_cite_25_3|>": "arxiv-164377", "<|multi_cite_26_1|>": "arxiv-200666", "<|multi_cite_26_2|>": "ss-1253519", "<|multi_cite_27_1|>": "ss-1284561", "<|multi_cite_27_2|>": "ss-712197", "<|multi_cite_28_1|>": "ss-1288192", "<|multi_cite_28_2|>": "arxiv-76369", "<|multi_cite_28_3|>": "ss-1683515", "<|cite_29|>": "arxiv-152996", "<|multi_cite_31_1|>": "arxiv-120457", "<|multi_cite_31_2|>": "ss-926592", "<|multi_cite_31_3|>": "arxiv-234479", "<|cite_32|>": "arxiv-120457", "<|cite_33|>": "ss-926592", "<|cite_34|>": "arxiv-234479", "<|multi_cite_35_1|>": "ss-1683516", "<|multi_cite_35_2|>": "ss-2406013", "<|multi_cite_36_1|>": "ss-2076472", "<|multi_cite_36_2|>": "ss-684407", "<|multi_cite_37_1|>": "ss-1683512", "<|multi_cite_37_2|>": "ss-1683513", "<|cite_38|>": "ss-926592", "<|cite_39|>": "arxiv-99846", "<|cite_40|>": "arxiv-99846"}
2006.14979
<|paper_start|> Title: The Sci-hub Effect: Sci-hub downloads lead to more article citations Abstract: The Sci-hub Effect: Sci-hub downloads lead to more article citations: Citations are often used as a metric of the impact of scientific publications. Here, we examine how the number of downloads from Sci-hub as well as various characteristics of publications and their authors predicts future citations. Using data from 12 leading journals in economics, consumer research, neuroscience, and multidisciplinary research, we found that articles downloaded from Sci-hub were cited 1.72 times more than papers not downloaded from Sci-hub and that the number of downloads from Sci-hub was a robust predictor of future citations. Among other characteristics of publications, the number of figures in a manuscript consistently predicts its future citations. The results suggest that limited access to publications may limit some scientific research from achieving its full impact. Introduction Science and its outputs are essential in daily life, as they help to understand our world and provide a basis for better decisions. Although scientific findings are often cited in social media and shared outside the scientific community <|cite_start|> (Reference: The Science of Sharing and the Sharing of Science: Why do members of the public share some scientific findings and not others? What can scientists do to increase the chances that their findings will be shared widely among nonscientists? To address these questions, we integrate past research on the psychological drivers of interpersonal communication with a study examining the sharing of hundreds of recent scientific discoveries. Our findings offer insights into (i) how attributes of a discovery and the way it is described impact sharing, (ii) who generates discoveries that are likely to be shared, and (iii) which types of people are most likely to share scientific discoveries. The results described here, combined with a review of recent research on interpersonal communication, suggest how scientists can frame their work to increase its dissemination. They also provide insights about which audiences may be the best targets for the diffusion of scientific content.) <|cite_end|>, their primary use is what we could call ``scholar consumption.'' This phenomenon includes using websites that provide subscription-based access to massive databases of scientific research <|cite_start|> (Reference: Relationships between consumption, publication and impact in French universities in a value perspective: a bibliometric analysis: ) <|cite_end|>. Despite the existence of local databases in each region of the world, \textit{Web of Science} and \textit{Scopus} are the most widely employed <|cite_start|> (Reference: A tale of two databases: the use of Web of Science and Scopus in academic papers: ) <|cite_end|>. The use of these databases, however, is changing given the emergence of alternative services such as Library Genesis, Paperhub, or Sci-hub, which provide free access to scientific publications without regard for their copyright status <|cite_start|> (Reference: The future of access: How a mosaic of next-gen solutions will deliver more convenient access to more users.: Rogue services such as SciHub deliver a clear message that users demand fast, convenient access to content. This paper discusses how the landscape of user access to content is being defined by RA21, CASA and LibKey. 
The reach and limitation of each approach is reviewed, discussing how the future of user experience will be a mosaic of services, that while different, all share core attributes.) <|cite_end|>. Sci-hub, in particular, has gained worldwide renown as an initiative with wide-ranging implications for the entire global academic system <|cite_start|> (Reference: Use, knowledge, and perception of the scientific contribution of sci-hub in medical students: Study in six countries in latin america: Introduction Sci-Hub is a useful web portal for people working in science as it provides access to millions of free scientific articles. Satisfaction and usage should be explored in the Latino student population. The objective of this study was to evaluate the use, knowledge, and perception of the scientific contribution of Sci-Hub in medical students from Latin America. Methodology A multicenter, observational, analytical study was conducted in 6632 medical students from 6 countries in Latin America. We surveyed from a previously validated instrument, delving into knowledge, monthly average usage, satisfaction level, and perception of the scientific contributions provided by Sci-Hub. Frequencies and percentages are described, and generalized linear models were used to establish statistical associations. Results Only 19.2% of study participants knew of Sci-Hub and its function, while the median use was twice a month. 29.9% of Sci-Hub-aware participants claimed they always find the desired scientific information in their Sci-Hub search; 62.5% of participants affirmed that Sci-Hub contributes to scientific investigation; only 2.2% reported that Sci-Hub does not contribute to science. Conclusion The majority of Latino students are not aware of Sci-Hub.) <|cite_end|> <|cite_start|> (Reference: Sci-Hub and medical practice: an ethical dilemma in Peru.: ) <|cite_end|> <|cite_start|> (Reference: Will the rise of sci-hub pave the road for the subscription-based access to publishing databases?: Sci-hub has become the new phenomenon on the academic publishing market. While its popularity is growing worldwide, large academic publishers are losing millions of dollars on paid article downloads. Perhaps the time has come to re-think the rules of the game of the publishing market and to look for novel solutions for monetizing scientific output. Subscription-based access to publishing databases, similar to the model used in online music streaming, might be one of these solutions that could secure the status quo and stabilize the market.) <|cite_end|> <|cite_start|> (Reference: Sci-hub provides access to nearly all scholarly literature.: The website Sci-Hub enables users to download PDF versions of scholarly articles, including many articles that are paywalled at their journal’s site. Sci-Hub has grown rapidly since its creation in 2011, but the extent of its coverage has been unclear. Here we report that, as of March 2017, Sci-Hub’s database contains 68.9% of the 81.6 million scholarly articles registered with Crossref and 85.1% of articles published in toll access journals. We find that coverage varies by discipline and publisher, and that Sci-Hub preferentially covers popular, paywalled content. For toll access articles, we find that Sci-Hub provides greater coverage than the University of Pennsylvania, a major research university in the United States. Green open access to toll access articles via licit services, on the other hand, remains quite limited.
Our interactive browser at https://greenelab.github.io/scihub allows users to explore these findings in more detail. For the first time, nearly all scholarly literature is available gratis to anyone with an Internet connection, suggesting the toll access business model may become unsustainable.) <|cite_end|> <|cite_start|> (Reference: Sci-hub: The new and ultimate disruptor? view from the front: The Harbinger project was a 3‐year‐long international study of the changing attitudes and behaviours of early career researchers (ECRs). One of the aims of the project was to discover if ECRs were adopting disrupting platforms that, legitimately or illegitimately, promote openness and sharing. It has been alleged that such an adoption appeals to them as Millennials. More than 100 ECRs from seven countries were questioned annually, and questions about Sc‐Hub were raised as part of discussions about discovery and access. Interview data were supplemented by desk research and Google Trends statistics. It was found that Sci‐Hub use was increasing and that a quarter of the ECRs now use it, with French ECRs being the biggest users. However, Sci‐Hub is making little headway with ECRs from the UK, USA, Malaysia, and China, although in China's case, this can be explained by it being banned and the country having its own equivalent, www.91lib.com. Sci‐Hub is used as much for convenience as necessity; use is not connected to the strength of library provision and and it has been suggested that it represents a bigger threat to publishers than ResearchGate, whose star might be waning.) <|cite_end|>. As Sci-hub gives access to nearly all scientific literature to anyone with an internet connection, some scholars suggest that the traditional business model that relies on subscription to journals may become unsustainable <|cite_start|> (Reference: My love-hate of Sci-Hub: Like many scientist-editors of journals published by nonprofit scientific societies, I have a love-hate relationship with Sci-Hub, the website operated out of Russia that provides access to 50 million pirated scientific articles to researchers worldwide (see the News story on p. 508). I recognize the underlying motivation of bringing global research content to the developing world. However, I also recognize that much traffic to Sci-Hub is from researchers who already have access to the articles they seek through mechanisms such as site licenses, open access, or other means. Authors who publish in Science journals, for example, can make their papers available immediately upon publication through free referrer links at the authors' websites. Research published after 1996 in a Science journal is made free with registration 1 year after its publication date. So what does the scientific community risk by gathering papers illegally?) <|cite_end|> <|cite_start|> (Reference: Sci-hub provides access to nearly all scholarly literature.: The website Sci-Hub enables users to download PDF versions of scholarly articles, including many articles that are paywalled at their journal’s site. Sci-Hub has grown rapidly since its creation in 2011, but the extent of its coverage has been unclear. Here we report that, as of March 2017, Sci-Hub’s database contains 68.9% of the 81.6 million scholarly articles registered with Crossref and 85.1% of articles published in toll access journals. We find that coverage varies by discipline and publisher, and that Sci-Hub preferentially covers popular, paywalled content. 
For toll access articles, we find that Sci-Hub provides greater coverage than the University of Pennsylvania, a major research university in the United States. Green open access to toll access articles via licit services, on the other hand, remains quite limited. Our interactive browser at https://greenelab.github.io/scihub allows users to explore these findings in more detail. For the first time, nearly all scholarly literature is available gratis to anyone with an Internet connection, suggesting the toll access business model may become unsustainable.) <|cite_end|> <|cite_start|> (Reference: Will the rise of sci-hub pave the road for the subscription-based access to publishing databases?: Sci-hub has become the new phenomenon on the academic publishing market. While its popularity is growing worldwide, large academic publishers are losing millions of dollars on paid article downloads. Perhaps the time has come to re-think the rules of the game of the publishing market and to look for novel solutions for monetizing scientific output. Subscription-based access to publishing databases, similar to the model used in online music streaming, might be one of these solutions that could secure the status quo and stabilize the market.) <|cite_end|>. Access to scientific literature has been unequal between countries of different levels of economic development. Sci-hub allows access to scientific literature for scientists of both developed and developing countries, and it may, therefore, decrease the difference in access to scientific publications between nations. Because the collective productive knowledge provided by scientific literature might be one of the driving mechanisms of economic development <|cite_start|> (Reference: The atlas of economic complexity: Mapping paths to prosperity: Why do some countries grow and others do not? The authors of The Atlas of Economic Complexity offer readers an explanation based on "Economic Complexity," a measure of a society’s productive knowledge. Prosperous societies are those that have the knowledge to make a larger variety of more complex products. The Atlas of Economic Complexity attempts to measure the amount of productive knowledge countries hold and how they can move to accumulate more of it by making more complex products. Through the graphical representation of the "Product Space," the authors are able to identify each country's "adjacent possible," or potential new products, making it easier to find paths to economic diversification and growth. In addition, they argue that a country’s economic complexity and its position in the product space are better predictors of economic growth than many other well-known development indicators, including measures of competitiveness, governance, finance, and schooling. Using innovative visualizations, the book locates each country in the product space, provides complexity and growth potential rankings for 128 countries, and offers individual country pages with detailed information about a country’s current capabilities and its diversification options. The maps and visualizations included in the Atlas can be used to find more viable paths to greater productive knowledge and prosperity.) 
<|cite_end|> <|cite_start|> (Reference: Productivity in physical and chemical science predicts the future economic growth of developing countries better than other popular indices.: Scientific productivity of middle income countries correlates stronger with present and future wealth than indices reflecting its financial, social, economic or technological sophistication. We identify the contribution of the relative productivity of different scientific disciplines in predicting the future economic growth of a nation. Results show that rich and poor countries differ in the relative proportion of their scientific output in the different disciplines: countries with higher relative productivity in basic sciences such as physics and chemistry had the highest economic growth in the following five years compared to countries with a higher relative productivity in applied sciences such as medicine and pharmacy. Results suggest that the economies of middle income countries that focus their academic efforts in selected areas of applied knowledge grow slower than countries which invest in general basic sciences.) <|cite_end|> <|cite_start|> (Reference: Can scientific productivity impact the economic complexity of countries?: ) <|cite_end|> <|cite_start|> (Reference: The impact of research output on economic growth by fields of science: a dynamic panel data analysis, 1980–2016: ) <|cite_end|>, more access to this literature would translate into more prosperity, especially for countries that previously had limited access. A more immediate impact of Sci-hub is likely to be observed in scientific knowledge exchange in general, and in scientific citation in particular. Existing studies have examined various other predictors of citations. The length of the title of a paper has been shown to be negatively associated with its annual citation rate <|cite_start|> (Reference: What makes an article citable?: To date, no research has been undertaken to examine what constitutes high citation counts. This study examined the quantifiable characteristics in publications and investigated their associations with citation per year. That is, this study empirically examined the relationships with length, authorship, and collaboration and citation counts in the 300 most cited publications in tourism and hospitality journals. The results reveal a negative relationship between the length of a title and citation per year.) <|cite_end|>. According to some scholars, the use of graphs and tables for communicating scientific findings could be critical as well <|cite_start|> (Reference: Constructing knowledge: The role of graphs and tables in hard and soft psychology.: Because graphs provide a compact, rhetorically powerful way of representing research findings, recent theories of science have postulated their use as a distinguishing feature of science. Studies have shown that the use of graphs in journal articles correlates highly with the hardness of scientific fields, both across disciplines and across sub-fields of psychology. In contrast, the use of tables and inferential statistics in psychology is inversely related to subfield hardness, suggesting that the relationship between hardness and graph use is not attributable to differences in the use of quantitative data in subfields or their commitment to empiricism. Enhanced "graphicacy" among psychologists could contribute to the progress of psychological science by providing alternatives to significance testing and by facilitating communication across subfields.) <|cite_end|>.
The area of knowledge and the impact factor of journals have also been deemed relevant predictors of citations <|cite_start|> (Reference: Journal impact factor shapes scientists’ reward signal in the prospect of publication.: The incentive structure of a scientist’s life is increasingly mimicking economic principles. While intensely criticized, the journal impact factor (JIF) has taken a role as the new currency for scientists. Successful goal-directed behavior in academia thus requires knowledge about the JIF. Using functional neuroimaging we examined how the JIF, as a powerful incentive in academia, has shaped the behavior of scientists and the reward signal in the striatum. We demonstrate that the reward signal in the nucleus accumbens increases with higher JIF during the anticipation of a publication and found a positive correlation with the personal publication record (pJIF) supporting the notion that scientists have incorporated the predominant reward principle of the scientific community in their reward system. The implications of this behavioral adaptation within the ecological niche of the scientist’s habitat remain unknown, but may also have effects which were not intended by the community.) <|cite_end|> <|cite_start|> (Reference: Citations for randomized controlled trials in sepsis literature: the halo effect caused by journal impact factor.: Citations for randomized controlled trials (RCT) are important for the dissemination of study results. However, predictors of citations for RCTs have not been investigated. The study aimed to investigate the predictors of citations for RCTs in sepsis literature. RCTs that investigated the efficacy of treatment strategies on clinical outcomes in sepsis patients were included, and publication dates were restricted to the period from 2000 to 2016. Risk of bias was assessed using the Cochrane handbook for systematic reviews and interventions. A multivariable linear regression model was built to investigate the independent variables associated with total citations. In total, 160 RCTs met our inclusion criteria and were included for analysis. The median of total citations was 28.5 (IQR: 6–76). The journal impact factor (IF) for articles was 6.312 (IQR: 3.143–7.214). The dependent variable was transformed by the square root to improve normality and meet the assumption of homoscedasticity. The journal IF (coefficient: 0.2; 95% CI: 0.16, 0.25) was independently associated with total citations. Large samples were associated with more total citations (coefficient: 0.0026; 95% CI: 0.0013, 0.0039). The study demonstrated that the journal IF was a major determinant of the RCT’s total citation number.) <|cite_end|>. Moreover, the number of citations is influenced by the number of authors per article <|cite_start|> (Reference: Inventor team size as a predictor of the future citation impact of patents: ) <|cite_end|> and the so-called ``\textit{chaperone effect in scientific publishing}'' that takes place when senior scientists appear as the last author of a paper whose first author is a junior scientist <|cite_start|> (Reference: The Chaperone Effect in Scientific Publishing: Experience plays a critical role in crafting high impact scientific work. This is particularly evident in top multidisciplinary journals, where a scientist is unlikely to appear as senior author if they have not previously published within the same journal.
Here, we develop a quantitative understanding of author order by quantifying this 'Chaperone Effect', capturing how scientists transition into senior status within a particular publication venue. We illustrate that the chaperone effect has different magnitude for journals in different branches of science, being more pronounced in medical and biological sciences and weaker in natural sciences. Finally, we show that in the case of high-impact venues, the chaperone effect has significant implications, specifically resulting in a higher average impact relative to papers authored by new PIs. Our findings shed light on the role played by experience in publishing within specific scientific journals, on the paths towards acquiring the necessary experience and expertise, and on the skills required to publish in prestigious venues.) <|cite_end|>. Despite previous studies suggesting that an article's downloads can predict its later citations <|cite_start|> (Reference: Earlier Web Usage Statistics as Predictors of Later Citation Impact: The use of citation counts to assess the impact of research articles is well established. However, the citation impact of an article can only be measured several years after it has been published. As research articles are increasingly accessed through the Web, the number of times an article is downloaded can be instantly recorded and counted. One would expect the number of times an article is read to be related both to the number of times it is cited and to how old the article is. This paper analyses how short-term Web usage impact predicts medium-term citation impact. The physics e-print archive (arXiv.org) is used to test this.) <|cite_end|>, to the best of our knowledge, the role of Sci-hub as a relevant predictor of citations remains unknown. Our goal in this paper is to study the effect of Sci-hub on the citation of published articles. Our central hypothesis is that accessing articles through Sci-hub is associated with a higher number of future citations because it bypasses the obstacles imposed by paid subscription-based information retrieval services, and thus increases the potential impact of the article <|cite_start|> (Reference: Sci-hub provides access to nearly all scholarly literature.: The website Sci-Hub enables users to download PDF versions of scholarly articles, including many articles that are paywalled at their journal’s site. Sci-Hub has grown rapidly since its creation in 2011, but the extent of its coverage has been unclear. Here we report that, as of March 2017, Sci-Hub’s database contains 68.9% of the 81.6 million scholarly articles registered with Crossref and 85.1% of articles published in toll access journals. We find that coverage varies by discipline and publisher, and that Sci-Hub preferentially covers popular, paywalled content. For toll access articles, we find that Sci-Hub provides greater coverage than the University of Pennsylvania, a major research university in the United States. Green open access to toll access articles via licit services, on the other hand, remains quite limited. Our interactive browser at https://greenelab.github.io/scihub allows users to explore these findings in more detail. For the first time, nearly all scholarly literature is available gratis to anyone with an Internet connection, suggesting the toll access business model may become unsustainable.) <|cite_end|>.
Additionally, we test the effects of various other characteristics of the article, the journal where it is published, and its authors in predicting its citation rate. The generalized specification of our model is as follows: \begin{equation} C_i = \beta \times Scihub_i + X_{i}' \gamma + \theta_i \label{eq1} \end{equation} where $C_i$ stands for the number of citations paper $i$ has received, as captured by official Scopus records. $\beta$ is our parameter of interest, which quantifies the relationship between the citations of a paper and the number of times paper $i$ was downloaded from Sci-hub. $X_{i}'$ is a vector containing the following control variables: i) the impact factor of the journal where the paper was published, ii) the length of the title of the paper, as captured by the number of types or unique words in it, iii) the number of figures included in paper $i$ and its supplementary material, iv) the number of tables included in paper $i$ and its supplementary material, v) the chaperone effect, as captured by the H-index of the first and last author of paper $i$ and the number of publications of the last author in the same journal, in cases where the last author had a higher H-index than the first author, and vi) the annual GDP per capita and the Nature Index for the country of affiliation of the first and the last author. The annual GDP per capita was obtained from the World Bank, and the Nature Index provides a close-to-real-time proxy of high-quality research output and collaboration at the institutional, national, and regional level. The Nature Index in our analyses is used only as a national proxy. Finally, the parameter $\theta_i$ represents the residuals of our model. <|paper_end|>
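As a worked illustration of Eq. (1), the specification can be estimated by ordinary least squares; the sketch below uses random stand-in data and a reduced set of three controls, so the variable names and values are illustrative assumptions rather than the paper's actual dataset or estimator.

```python
import numpy as np

# OLS sketch for Eq. (1): C_i = beta * Scihub_i + X_i' gamma + theta_i.
# All data below are random stand-ins and the control set is reduced to
# three columns; neither reflects the paper's actual dataset.
rng = np.random.default_rng(42)
n = 500
scihub = rng.poisson(3.0, size=n).astype(float)   # Sci-hub downloads
controls = np.column_stack([
    rng.normal(4.0, 2.0, size=n),    # journal impact factor
    rng.integers(5, 20, size=n),     # unique words in the title
    rng.integers(0, 10, size=n),     # number of figures
]).astype(float)
theta = rng.normal(0.0, 1.0, size=n)              # residual term
citations = 1.7 * scihub + controls @ np.array([0.8, -0.1, 0.3]) + theta

# Design matrix: intercept, Sci-hub downloads, then the controls X_i.
design = np.column_stack([np.ones(n), scihub, controls])
coef, *_ = np.linalg.lstsq(design, citations, rcond=None)
print(f"beta_hat = {coef[1]:.3f}")  # estimate of the parameter beta
```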
[ "<|reference_start|> Will the rise of sci-hub pave the road for the subscription-based access to publishing databases?: Sci-hub has become the new phenomenon on the academic publishing market. While its popularity is growing worldwide, large academic publishers are losing millions of dollars on paid article downloads. Perhaps the time has come to re-think the rules of the game of the publishing market and to look for novel solutions for monetizing scientific output. Subscription-based access to publishing databases, similar to the model used in online music streaming, might be one of these solutions that could secure the status quo and stabilize the market. <|reference_end|>", "<|reference_start|> Sci-hub provides access to nearly all scholarly literature.: The website Sci-Hub enables users to download PDF versions of scholarly articles, including many articles that are paywalled at their journal’s site. Sci-Hub has grown rapidly since its creation in 2011, but the extent of its coverage has been unclear. Here we report that, as of March 2017, Sci-Hub’s database contains 68.9% of the 81.6 million scholarly articles registered with Crossref and 85.1% of articles published in toll access journals. We find that coverage varies by discipline and publisher, and that Sci-Hub preferentially covers popular, paywalled content. For toll access articles, we find that Sci-Hub provides greater coverage than the University of Pennsylvania, a major research university in the United States. Green open access to toll access articles via licit services, on the other hand, remains quite limited. Our interactive browser at https://greenelab.github.io/scihub allows users to explore these findings in more detail. For the first time, nearly all scholarly literature is available gratis to anyone with an Internet connection, suggesting the toll access business model may become unsustainable. <|reference_end|>", "<|reference_start|> Sci-hub provides access to nearly all scholarly literature.: The website Sci-Hub enables users to download PDF versions of scholarly articles, including many articles that are paywalled at their journal’s site. Sci-Hub has grown rapidly since its creation in 2011, but the extent of its coverage has been unclear. Here we report that, as of March 2017, Sci-Hub’s database contains 68.9% of the 81.6 million scholarly articles registered with Crossref and 85.1% of articles published in toll access journals. We find that coverage varies by discipline and publisher, and that Sci-Hub preferentially covers popular, paywalled content. For toll access articles, we find that Sci-Hub provides greater coverage than the University of Pennsylvania, a major research university in the United States. Green open access to toll access articles via licit services, on the other hand, remains quite limited. Our interactive browser at https://greenelab.github.io/scihub allows users to explore these findings in more detail. For the first time, nearly all scholarly literature is available gratis to anyone with an Internet connection, suggesting the toll access business model may become unsustainable. <|reference_end|>", "<|reference_start|> Inventor team size as a predictor of the future citation impact of patents: <|reference_end|>" ]
[ 6, 7, 10, 20 ]
{"<|cite_1|>": "ss-781839", "<|cite_2|>": "ss-2201982", "<|cite_3|>": "ss-2201983", "<|cite_4|>": "ss-2201984", "<|multi_cite_5_1|>": "ss-2201985", "<|multi_cite_5_2|>": "ss-2201986", "<|multi_cite_5_3|>": "ss-2201987", "<|multi_cite_5_4|>": "ss-2201988", "<|multi_cite_5_5|>": "ss-2201989", "<|multi_cite_6_1|>": "ss-2201990", "<|multi_cite_6_2|>": "ss-2201988", "<|multi_cite_6_3|>": "ss-2201987", "<|multi_cite_8_1|>": "ss-952249", "<|multi_cite_8_2|>": "ss-2201991", "<|multi_cite_8_3|>": "ss-2201992", "<|multi_cite_8_4|>": "ss-2201993", "<|cite_9|>": "ss-2201994", "<|cite_10|>": "ss-2201995", "<|multi_cite_11_1|>": "ss-2201996", "<|multi_cite_11_3|>": "ss-2201997", "<|cite_12|>": "ss-2201998", "<|cite_13|>": "arxiv-185685", "<|cite_14|>": "arxiv-672691", "<|cite_15|>": "ss-2201988"}
1403.5986
<|paper_start|> Title: Controllability Analysis for Multirotor Helicopter Rotor Degradation and Failure Abstract: Controllability Analysis for Multirotor Helicopter Rotor Degradation and Failure: This paper considers the controllability analysis problem for a class of multirotor systems subject to rotor failure/wear. It is shown that classical controllability theories of linear systems are not sufficient to test the controllability of the considered multirotors. Owing to this, an easy-to-use measurement index is introduced to assess the available control authority. Based on it, a new necessary and sufficient condition for the controllability of multirotors is derived. Furthermore, a controllability test procedure is developed. The proposed controllability test method is applied to a class of hexacopters with different rotor configurations and different rotor efficiency parameters to show its effectiveness. The analysis results show that hexacopters with different rotor configurations have different fault-tolerant capabilities. It is therefore necessary to test the controllability of the multirotors before any fault-tolerant control strategies are employed. Introduction Multirotor helicopters <|cite_start|> (Reference: Multirotor Aerial Vehicles: Modeling Estimation and Control of Quadrotor: This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles that includes the common four-rotor or quadrotor case.) <|cite_end|> <|cite_start|> (Reference: Hardware and Software Architecture for Nonlinear Control of Multirotor Helicopters: This paper presents the design and implementation of a nonlinear control scheme for multirotor helicopters that takes first-order drag effects into account explicitly. A dynamic model including the blade flapping and induced drag forces is provided and a hierarchical nonlinear controller is presented. This controller is designed for both high-precision flights as well as robustness against model uncertainties and external disturbances. This is achieved by using saturated integrators with fast desaturation properties. The implementation of the controller on the flybox hexacopter platform is described. The hardware and software architecture of this UAV is discussed, and useful hints and insights gained during its design process are presented. Finally, experimental results and videos are reported to demonstrate the successful implementation and the performance of the overall system.) <|cite_end|> <|cite_start|> (Reference: Kinematic analysis and control design for a nonplanar multirotor vehicle: A new class of nonplanar multirotor rotary vehicle is introduced that has the capability of independent control of both thrust and torque vectors in three dimensions. The vehicle configuration is based around the use of six thrust producing rotors arranged in pairs on three separate reference planes. Variable thrust can be provided via fixed-pitch/variable-speed rotors or variable-pitch/fixed-speed rotors. The orientation of rotor reference planes affects the orthogonality of force and torque control, and it is shown how maneuverability can be traded with propulsive efficiency. The static mapping between force and torque control outputs and rotor inputs is derived from rotor geometry and a simple rotor aerodynamic model that does not include interference between rotors or fuselage drag and does not explicitly include induced-velocity effects.
Controllers are synthesized for both position and attitude control, with acceptable stability demonstrated via Lyapunov analysis. Vehicle closed-loop dynamic response is investigated in simulation, and controller performance is shown to meet design requirements in the presence of unmodeled rotor inertia effects. Experimental results on a static test rig confirm that the simplified rotor aerodynamic modeling used for control synthesis is adequate for symmetric flight conditions around hover. A free flying prototype has been flight-tested in hover, showing that practical vehicles are possible, accepting the fact that increased control capability comes at the expense of reduced payload and duration, compared with a conventional helicopter.) <|cite_end|> have attracted increasing attention in recent years because of their valuable contribution and cost-effective application in tasks such as surveillance and search-and-rescue missions. However, there is a potential risk to civil safety if a multirotor aircraft crashes, especially in an urban area. Therefore, it is of great importance to consider the flight safety of multirotor helicopters in the presence of rotor faults or failures <|cite_start|> (Reference: Fault/damage tolerant control of a quadrotor helicopter UAV using model reference adaptive control and gain-scheduled PID: In this paper, two useful approaches to Fault Tolerant Control (FTC) for a quadrotor helicopter Unmanned Aerial Vehicle (UAV) in the presence of fault(s) in one or more actuators during flight have been investigated and experimentally tested based on a Model Reference Adaptive Control (MRAC) and a Gain-Scheduled Proportional-Integral-Derivative (GS-PID) control. A Linear Quadratic Regulator (LQR) controller is used in cooperation with the MRAC and the GS-PID to control the pitch and roll attitudes of the helicopter. Unlike the MRAC, the GS-PID is used only to control the helicopter in height control mode. MRAC is used to control the helicopter in both height control as well as trajectory control. For damage tolerant control the MRAC is evaluated based on partial damage of one of propellers during flight. Finally, the experimental flight testing results of both controllers are presented for the fault tolerant control performance comparison in the presence of actuator faults in the quadrotor UAV.) <|cite_end|>. Fault-Tolerant Control (FTC) <|cite_start|> (Reference: Fault tolerant flight control: This PhD dissertation deals with Fault-Tolerant Flight Control System as applied to Bell-205 Helicopter which was a platform for earlier research on design of control laws using modern robust control techniques. Earlier research was focusing on control law design, simulation, and flight testing. This thesis has applied the general concept in Fault- Tolerant Control on the vehicle and built Fault Detection, Isolation, and Accommodation (FDIA) for the sensors using actual Flight Test Data (FTD) and the results was validated against other FTD data sets and found quite satisfactory. In an additional step, the FDIA was integrated with an H∞ controller and checked against various faults and the resultant FTFCS was tested. The µ-synthesis was used in the analysis & design of an FTFCS and this integration has indicated better performance and stability margins over H∞.
Finally, the insight thinking about the implication of faults on controllers and overall system integrity has led to mathematical formulation of a novel FTFCS scheme which is adaptive in nature and based on manipulation of Algebraic Riccati Equations. The simulation results is not part of the thesis.) <|cite_end|> has the potential to improve the safety and reliability of multirotor helicopters. FTC is the ability of a controlled system to maintain or gracefully degrade control objectives despite the occurrence of a fault <|cite_start|> (Reference: Reconfigurability Analysis for a Class of Linear Hybrid Systems: Abstract The reconfigurability regarded as a kind of system's intrinsic property is discussed for a class of linear hybrid systems based on the controllability concept. An algebraic approach is proposed for the reconfigurability analysis. Some sufficient and necessary computable conditions are derived just based on the manipulation of system matrices. Furthermore, some novel features of fault-tolerant hybrid systems, such as the spatial redundancy and the temporal redundancy induced by this analysis, are also exploited. Finally, the proposed method and results are illustrated by examples.) <|cite_end|>. There are many applications in which fault tolerance may be achieved by using adaptive control, reliable control, or reconfigurable control strategies <|cite_start|> (Reference: Development of an Active Fault-Tolerant Flight Control Strategy: This paper discusses the design of an active fault-tolerant flight control strategy for improvement of the operational control capability of the aircraft system. The research work draws expertise from actions undertaken within the European Flight Mechanics Action Group [FM-AG(16)] on fault-tolerant control, which develops a collaborative effort in Europe to create new fault-tolerant control technologies that significantly advance the goals of the aviation safety. The methodology is applied to a trimmable horizontal stabilizer runaway fault occurring in a large transport aircraft. The goal is to provide a self-repairing capability to enable the pilot to land the aircraft safely. The fault-tolerant control strategy works in such a way that once the fault is detected by the fault detection and isolation unit, a compensation loop is activated for safe recovery. A key feature of the proposed strategy is that the design of the fault-tolerant control loop is done independently of the nominal autopilot and the nominal flight control system in place. Nonlinear simulation results demonstrate the effectiveness of the proposed fault-tolerant control scheme.) <|cite_end|>. Some strategies involve explicit fault diagnosis, and some do not. The reader is referred to a recent survey paper <|cite_start|> (Reference: Bibliographical review on reconfigurable fault-tolerant control systems: ) <|cite_end|> for an outline of the state of the art in the field of FTC. However, only a few attempts are known that focus on fundamental FTC property analysis, one of which is defined as the (control) reconfigurability <|cite_start|> (Reference: Reconfigurability Analysis for a Class of Linear Hybrid Systems: Abstract The reconfigurability regarded as a kind of system's intrinsic property is discussed for a class of linear hybrid systems based on the controllability concept. An algebraic approach is proposed for the reconfigurability analysis. Some sufficient and necessary computable conditions are derived just based on the manipulation of system matrices.
Furthermore, some novel features of fault-tolerant hybrid systems, such as the spatial redundancy and the temporal redundancy induced by this analysis, are also exploited. Finally, the proposed method and results are illustrated by examples.) <|cite_end|>. A faulty multirotor system with inadequate reconfigurability cannot be made to effectively tolerate faults regardless of the feedback control strategy used <|cite_start|> (Reference: Technical Communique: Control reconfigurability of linear time-invariant systems: ) <|cite_end|>. The control reconfigurability can be analyzed from the intrinsic and performance-based perspectives. The aim of this Note is to analyze the control reconfigurability for multirotor systems (4-, 6- and 8-rotor helicopters, etc.) from the controllability analysis point of view. Classical controllability theories of linear systems are not sufficient to test the controllability of the considered multirotor helicopters, as the rotors can only provide unidirectional lift (upward or downward) in practice. In our previous work <|cite_start|> (Reference: Controllability Analysis and Degraded Control for a Class of Hexacopters Subject to Rotor Failures: This paper considers the controllability analysis and fault tolerant control problem for a class of hexacopters. It is shown that the considered hexacopter is uncontrollable when one rotor fails, even though the hexacopter is over-actuated and its controllability matrix is row full rank. According to this, a fault tolerant control strategy is proposed to control a degraded system, where the yaw states of the considered hexacopter are ignored. Theoretical analysis indicates that the degraded system is controllable if and only if the maximum lift of each rotor is greater than a certain value. The simulation and experiment results on a prototype hexacopter show the feasibility of our controllability analysis and degraded control strategy.) <|cite_end|>, it was shown that a hexacopter with the standard symmetrical configuration is uncontrollable if one rotor fails, though the controllability matrix of the hexacopter is row full rank. Thus, the reconfigurability based on the controllability Gramian <|cite_start|> (Reference: Technical Communique: Control reconfigurability of linear time-invariant systems: ) <|cite_end|> is no longer applicable. Brammer in <|cite_start|> (Reference: Controllability in linear autonomous systems with positive controllers: This paper presents results on the controllability of the autonomous linear control system in $R^n $, $\dot x = Ax + Bu$, where $u \in \Omega \subset R^m $ without the assumption that the origin in $R^m $ is interior to $\Omega $. Necessary and sufficient conditions are given for null-controllability (controllability of each point in some neighborhood of the origin to the origin) and global null-controllability with uniformly bounded controllers. This paper extends some results of Saperstone and Yorke who considered the problem of the controllability of the above system with $m = 1$ and $\Omega = [0,1]$ and obtained necessary and sufficient conditions for controllability for this system. Corollaries to the main result include existence of time-optimal controllers and controllability of nonlinear systems. An example of control of an economic system is presented.) <|cite_end|> proposed a necessary and sufficient condition for the controllability of linear autonomous systems with positive control constraints, which can be used to analyze the controllability of multirotor systems.
However, the theorems in <|cite_start|> (Reference: Controllability in linear autonomous systems with positive controllers: This paper presents results on the controllability of the autonomous linear control system in $R^n $, $\dot x = Ax + Bu$, where $u \in \Omega \subset R^m $ without the assumption that the origin in $R^m $ is interior to $\Omega $. Necessary and sufficient conditions are given for null-controllability (controllability of each point in some neighborhood of the origin to the origin) and global null-controllability with uniformly bounded controllers. This paper extends some results of Saperstone and Yorke who considered the problem of the controllability of the above system with $m = 1$ and $\Omega = [0,1]$ and obtained necessary and sufficient conditions for controllability for this system. Corollaries to the main result include existence of time-optimal controllers and controllability of nonlinear systems. An example of control of an economic system is presented.) <|cite_end|> are not easy to use in practice. Owing to this, the controllability of a given system is reduced to that of its subsystems with real eigenvalues based on the Jordan canonical form in <|cite_start|> (Reference: Positive Controllability Test for Continuous-Time Linear Systems: This note presents a method to test the positive controllability of a continuous-time linear system. Based on the Jordan canonical form, the controllability of a given system can be reduced to those of its subsystems with real eigenvalues. Because the dimension of the subsystem is smaller than that of the given system, the controllability test can be simplified. It is pointed out that the conditions for a continuous-time system to be positive controllable have almost the same expressions as those for a discrete-time system.) <|cite_end|>. However, appropriate numerically stable algorithms to compute the real Jordan canonical form should be used to avoid ill-conditioned calculations. Moreover, a step-by-step controllability test procedure is not given. To address these problems, in this Note the theory proposed in <|cite_start|> (Reference: Controllability in linear autonomous systems with positive controllers: This paper presents results on the controllability of the autonomous linear control system in $R^n $, $\dot x = Ax + Bu$, where $u \in \Omega \subset R^m $ without the assumption that the origin in $R^m $ is interior to $\Omega $. Necessary and sufficient conditions are given for null-controllability (controllability of each point in some neighborhood of the origin to the origin) and global null-controllability with uniformly bounded controllers. This paper extends some results of Saperstone and Yorke who considered the problem of the controllability of the above system with $m = 1$ and $\Omega = [0,1]$ and obtained necessary and sufficient conditions for controllability for this system. Corollaries to the main result include existence of time-optimal controllers and controllability of nonlinear systems. An example of control of an economic system is presented.) <|cite_end|> is extended and a new necessary and sufficient condition for controllability is derived for the considered multirotor systems. Nowadays, larger multirotor aircraft are starting to emerge and some multirotor aircraft are controlled by varying the collective pitch of the blades.
This work considers only multirotor helicopters controlled by varying the RPM (revolutions per minute) of each rotor, but the research can be extended to most multirotor aircraft regardless of size, whether they are controlled by varying the collective pitch of the blades or the RPM. The linear dynamical model of the considered multirotor helicopters around hover conditions is derived first, and then the control constraint is specified. It is pointed out that classical controllability theories of linear systems are not sufficient to test the controllability of the derived model (Section II). Then the controllability of the derived model is studied based on the theory in <|cite_start|> (Reference: Controllability in linear autonomous systems with positive controllers: This paper presents results on the controllability of the autonomous linear control system in $R^n $, $\dot x = Ax + Bu$, where $u \in \Omega \subset R^m $ without the assumption that the origin in $R^m $ is interior to $\Omega $. Necessary and sufficient conditions are given for null-controllability (controllability of each point in some neighborhood of the origin to the origin) and global null-controllability with uniformly bounded controllers. This paper extends some results of Saperstone and Yorke who considered the problem of the controllability of the above system with $m = 1$ and $\Omega = [0,1]$ and obtained necessary and sufficient conditions for controllability for this system. Corollaries to the main result include existence of time-optimal controllers and controllability of nonlinear systems. An example of control of an economic system is presented.) <|cite_end|>, and two conditions that are necessary and sufficient for the controllability of the derived model are given. In order to make the two conditions easy to test in practice, an Available Control Authority Index (ACAI) is introduced to quantify the available control authority of the considered multirotor systems. Based on the ACAI, a new necessary and sufficient condition is given to test the controllability of the considered multirotor systems (Section III). Furthermore, the computation of the proposed ACAI and a step-by-step controllability test procedure are presented for practical application (Section IV). The proposed controllability test method is used to analyze the controllability of a class of hexacopters to show its effectiveness (Section V). The major contributions of this Note are: (i) an ACAI to quantify the available control authority of the considered multirotor systems, (ii) a new necessary and sufficient controllability test condition based on the proposed ACAI, and (iii) a step-by-step controllability test procedure for the considered multirotor systems. <|paper_end|>
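To illustrate the gap the Note describes, the sketch below runs the classical Kalman rank test on a generic hover linearization; the control-effectiveness matrix is a random placeholder rather than a real hexacopter's geometry, so everything here is an illustrative assumption. The point is that a full-rank controllability matrix certifies controllability only for unconstrained inputs, so a positivity-aware criterion such as the ACAI is still required when rotor lifts are unidirectional and bounded.

```python
import numpy as np

def kalman_controllability_matrix(A, B):
    """Stack [B, AB, ..., A^(n-1)B] for the classical rank test."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Generic hover linearization sketch: four attitude/altitude states plus
# their rates, driven by six rotor lifts through a mixing matrix M. The
# random M is an illustrative placeholder, not a real hexacopter geometry.
n_out, n_rotors = 4, 6
A = np.zeros((2 * n_out, 2 * n_out))
A[:n_out, n_out:] = np.eye(n_out)    # positions integrate their rates
M = np.random.default_rng(1).normal(size=(n_out, n_rotors))
B = np.vstack([np.zeros((n_out, n_rotors)), M])

C = kalman_controllability_matrix(A, B)
print("rank:", np.linalg.matrix_rank(C), "of", A.shape[0])
# A full-rank result only certifies controllability for unconstrained u;
# with each rotor lift restricted to [0, f_max] the system can still be
# uncontrollable, which is the gap the ACAI-based test addresses.
```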
[ "<|reference_start|> Multirotor Aerial Vehicles: Modeling Estimation and Control of Quadrotor: This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles that includes the common four-rotor or quadrotor case. <|reference_end|>", "<|reference_start|> Hardware and Software Architecture for Nonlinear Control of Multirotor Helicopters: This paper presents the design and implementation of a nonlinear control scheme for multirotor helicopters that takes first-order drag effects into account explicitly. A dynamic model including the blade flapping and induced drag forces is provided and a hierarchical nonlinear controller is presented. This controller is designed for both high-precision flights as well as robustness against model uncertainties and external disturbances. This is achieved by using saturated integrators with fast desaturation properties. The implementation of the controller on the flybox hexacopter platform is described. The hardware and software architecture of this UAV is discussed, and useful hints and insights gained during its design process are presented. Finally, experimental results and videos are reported to demonstrate the successful implementation and the performance of the overall system. <|reference_end|>", "<|reference_start|> Fault/damage tolerant control of a quadrotor helicopter UAV using model reference adaptive control and gain-scheduled PID: In this paper, two useful approaches to Fault Tolerant Control (FTC) for a quadrotor helicopter Unmanned Aerial Vehicle (UAV) in the presence of fault(s) in one or more actuators during flight have been investigated and experimentally tested based on a Model Reference Adaptive Control (MRAC) and a Gain-Scheduled Proportional-IntegralDerivative (GS-PID) control. A Linear Quadratic Regulator (LQR) controller is used in cooperation with the MRAC and the GS-PID to control the pitch and roll attitudes of the helicopter. Unlike the MRAC, the GS-PID is used only to control the helicopter in height control mode. MRAC is used to control the helicopter in both height control as well as trajectory control. For damage tolerant control the MRAC is evaluated based on partial damage of one of propellers during flight. Finally, the experimental flight testing results of both controllers are presented for the fault tolerant control performance comparison in the presence of actuator faults in the quadrotor UAV. <|reference_end|>", "<|reference_start|> Development of an Active Fault-Tolerant Flight Control Strategy: This paper discusses the design of an active fault-tolerant flight control strategy for improvement of the operational control capability of the aircraft system. The research work draws expertise from actions undertaken within the European Flight Mechanics Action Group [FM-AG(16)] on fault-tolerant control, which develops a collaborative effort in Europe to create new fault-tolerant control technologies that significantly advance the goals of the aviation safety. The methodology is applied to a trimmable horizontal stabilizer runaway fault occurring in a large transport aircraft. The goal is to provide a self-repairing capability to enable the pilot to land the aircraft safely. The fault-tolerant control strategy works in such a way that once the fault is detected by the fault detection and isolation unit, a compensation loop is activated for safe recovery. 
A key feature of the proposed strategy is that the design of the fault-tolerant control loop is done independently of the nominal autopilot and the nominal flight control system in place. Nonlinear simulation results demonstrate the effectiveness of the proposed fault-tolerant control scheme. <|reference_end|>" ]
[ 0, 1, 3, 6 ]
{"<|multi_cite_1_1|>": "ss-1123597", "<|multi_cite_1_2|>": "ss-2179603", "<|multi_cite_1_3|>": "ss-1135296", "<|cite_2|>": "ss-777147", "<|cite_3|>": "ss-1467420", "<|cite_4|>": "ss-2179604", "<|multi_cite_5_2|>": "ss-2179605", "<|cite_6|>": "ss-1833514", "<|cite_7|>": "ss-2179604", "<|cite_8|>": "ss-2179606", "<|cite_9|>": "arxiv-47517", "<|cite_10|>": "ss-2179606", "<|cite_11|>": "ss-951461", "<|cite_12|>": "ss-951461", "<|cite_13|>": "ss-2179607", "<|cite_14|>": "ss-951461", "<|cite_15|>": "ss-951461"}
2408.08045
<|paper_start|> Title: Joint Message Detection, Channel, and User Position Estimation for Unsourced Random Access in Cell-Free Networks Abstract: Joint Message Detection, Channel, and User Position Estimation for Unsourced Random Access in Cell-Free Networks: We consider unsourced random access (uRA) in user-centric cell-free (CF) wireless networks, where random access users send codewords from a common codebook during specifically dedicated random access channel (RACH) slots. The system is conceptually similar to the so-called 2-step RACH currently discussed in 3GPP standardization. In order to cope with the distributed and CF nature of the network, we propose to partition the network coverage area into zones (referred to as ''locations'') and assign an uRA codebook to each location, such that users in a certain location make use of the associated codebook. The centralized uRA decoder makes use of the multisource AMP algorithm recently proposed by the authors. This yields at once the list of active uRA codewords, an estimate of the corresponding channel vectors, and an estimate of the active users' position. We show excellent performance of this approach and perfect agreement with the rigorous theoretical ''state evolution'' analysis. We also show that the proposed ''location-based'' partitioned codebook approach significantly outperforms a baseline system with a single non-partitioned uRA codebook. Introduction \label{intro} To support massive connectivity and low latency, unsourced Random Access (uRA) has been proposed and widely investigated in several works <|cite_start|> (Reference: A perspective on massive random-access: This paper discusses the contemporary problem of providing multiple-access (MAC) to a massive number of uncoordinated users. First, we define a random-access code for Ka-user Gaussian MAC to be a collection of norm-constrained vectors such that the noisy sum of any Ka of them can be decoded with a given (suitably defined) probability of error. An achievability bound for such codes is proposed and compared against popular practical solutions: ALOHA, coded slotted ALOHA, CDMA, and treating interference as noise. It is found out that as the number of users increases existing solutions become vastly energy-inefficient. Second, we discuss the asymptotic (in blocklength) problem of coding for a K-user Gaussian MAC when K is proportional to blocklength and each user's payload is fixed. It is discovered that the energy-per-bit vs. spectral efficiency exhibits a rather curious tradeoff in this case.) <|cite_end|> <|cite_start|> (Reference: SPARCs for Unsourced Random Access: Unsourced random-access (U-RA) is a type of grant-free random access with a virtually unlimited number of users, of which only a certain number $K_a$ are active on the same time slot. Users employ exactly the same codebook, and the task of the receiver is to decode the list of transmitted messages. We present a concatenated coding construction for U-RA on the AWGN channel, in which a sparse regression code (SPARC) is used as an inner code to create an effective outer OR-channel. Then an outer code is used to resolve the multiple-access interference in the OR-MAC. We propose a modified version of the approximate message passing (AMP) algorithm as an inner decoder and give a precise asymptotic analysis of the error probabilities of the AMP decoder and of a hypothetical optimal inner MAP decoder. 
This analysis shows that the concatenated construction can achieve a vanishing per-user error probability in the limit of large blocklength and a large number of active users at sum-rates up to the symmetric Shannon capacity, i.e. as long as $K_aR < 0.5\log_2(1+K_a\SNR)$. This extends previous point-to-point optimality results about SPARCs to the unsourced multiuser scenario. Furthermore, we give an optimization algorithm to find the power allocation for the inner SPARC code that minimizes the required $\SNR$.) <|cite_end|> <|cite_start|> (Reference: Non-Bayesian Activity Detection, Large-Scale Fading Coefficient Estimation, and Unsourced Random Access with a Massive MIMO Receiver: In this paper, we study the problem of user activity detection and large-scale fading coefficient estimation in a random access wireless uplink with a massive MIMO base station with a large number $M$ of antennas and a large number of wireless single-antenna devices (users). We consider a block fading channel model where the $M$-dimensional channel vector of each user remains constant over a coherence block containing $L$ signal dimensions in time-frequency. In the considered setting, the number of potential users $K_\text{tot}$ is much larger than $L$ but at each time slot only $K_a<<K_\text{tot}$ of them are active. Previous results, based on compressed sensing, require that $K_a\leq L$, which is a bottleneck in massive deployment scenarios such as Internet-of-Things and unsourced random access. In this work we show that such limitation can be overcome when the number of base station antennas $M$ is sufficiently large. We also provide two algorithms. One is based on Non-Negative Least-Squares, for which the above scaling result can be rigorously proved. The other consists of a low-complexity iterative componentwise minimization of the likelihood function of the underlying problem. Finally, we use the discussed approximated ML algorithm as the decoder for the inner code in a concatenated coding scheme for unsourced random access, a grant-free uncoordinated multiple access scheme where all users make use of the same codebook, and the massive MIMO base station must come up with the list of transmitted messages irrespectively of the identity of the transmitters. We show that reliable communication is possible at any $E_b/N_0$ provided that a sufficiently large number of base station antennas is used, and that a sum spectral efficiency in the order of $\mathcal{O}(L\log(L))$ is achievable.) <|cite_end|> <|cite_start|> (Reference: Unsourced Random Access With Coded Compressed Sensing: Integrating AMP and Belief Propagation: Sparse regression codes with approximate message passing (AMP) decoding have gained much attention in recent times. The concepts underlying this coding scheme extend to unsourced random access with coded compressed sensing (CCS), as first demonstrated by Fengler, Jung, and Caire. Specifically, their approach employs a concatenated coding framework with an inner AMP decoder followed by an outer tree decoder. In their original implementation, these two components work independently of each other, with the tree decoder acting on the static output of the AMP decoder. This article introduces a novel framework where the inner AMP decoder and the outer decoder operate in tandem, dynamically passing information back and forth to take full advantage of the underlying CCS structure. 
This scheme necessitates the redesign of the outer code as to enable belief propagation in a computationally tractable manner. The enhanced architecture exhibits significant performance benefits over a range of system parameters. The error performance of the proposed scheme can be accurately predicted through a set of equations known as state evolution of AMP. These findings are supported both analytically and through numerical methods.) <|cite_end|>. In uRA, a virtually unlimited population of user devices (UEs) makes use of the same codebook. At every random access (RACH) slot, only a finite number of users are active and transmit a codeword from the uRA codebook. These codewords can be seen as ``tokens'', used by the uRA users to be identified and to reserve some transmission opportunity in a subsequent slot. Interestingly, a similar idea is being developed in 3GPP standardization and is referred to as 2-step RACH <|cite_start|> (Reference: {Two-step random access in 5G new radio: Channel structure design and performance: A common design of the random access procedure on the physical random access channel (PRACH) is required for the diverse usage scenarios in the fifth generation new radio (5G NR) mobile networks. Based on the latest 3GPP specifications and evaluation assumptions agreed for Release 16, the 2 step-RACH (2SR) enhancement, composed of the denoted MsgA and MsgB, not only reduces the latency but also the control-signalling overhead due to the reduced number of messages transmitted. The channel structure of MsgA comprises RACH preamble and data in the physical uplink shared channel (PUSCH) while MsgB combines the random access response and the contention resolution. This procedure should operate in local area (LA), medium range (MR) and wide area (WA) cells despite the lack of time alignment (TA) in the PUSCH part of MsgA. The demodulation performance degradation observed without time offset compensation at the base station (gNB), specially for MR or WA cells, highlight that practical gNB implementations relying in MAC control element-based TA command for PUSCH time alignment are not conceivable for 2SR. Furthermore, in the case that all preambles from multiple users (UEs) trying to perform the initial access are mapped to the same PUSCH physical resources, the associated data parts overlap and may result in unsuccessful decoding. There is therefore a trade-off between the collision probability of the PUSCH part of MsgA and the resource overhead for 2SR. This paper addresses the channel structure design of this procedure for the preamble and data parts of MsgA together with the receiver processing framework. The performance results suggest that using lower payload sizes provide higher resource utilization and allow more UEs to be multiplexed within the same PUSCH occasion. In addition, using different DMRS ports for UEs sharing same physical resources decrease the probability of failure in the decoding of the data part of MsgA while reduces the resource overhead for 2SR.) <|cite_end|>, where the uRA codebook is formed by a collection of ``preamble sequences'', each pointing at a block of time-frequency referred to as physical uplink shared channel (PUSCH) opportunity (see Fig.~\ref{2step}).
\begin{figure}[t] \centering \includegraphics[width=7cm]{2step} \caption{a) Conventional 4-step RACH; b) Novel 2-step RACH.} \label{2step} \vspace{-0.5cm} \end{figure} In a cell-free (CF) user-centric wireless network (e.g., see <|cite_start|> (Reference: Foundations of User-Centric Cell-Free Massive MIMO: Imagine a coverage area where each mobile device is communicating with a preferred set of wireless access points (among many) that are selected based on its needs and cooperate to jointly serve it, instead of creating autonomous cells. This effectively leads to a user-centric post-cellular network architecture, which can resolve many of the interference issues and service-quality variations that appear in cellular networks. This concept is called User-centric Cell-free Massive MIMO (multiple-input multiple-output) and has its roots in the intersection between three technology components: Massive MIMO, coordinated multipoint processing, and ultra-dense networks. The main challenge is to achieve the benefits of cell-free operation in a practically feasible way, with computational complexity and fronthaul requirements that are scalable to enable massively large networks with many mobile devices. This monograph covers the foundations of User-centric Cell-free Massive MIMO, starting from the motivation and mathematical definition. It continues by describing the state-of-the-art signal processing algorithms for channel estimation, uplink data reception, and downlink data transmission with either centralized or distributed implementation. The achievable spectral efficiency is mathematically derived and evaluated numerically using a running example that exposes the impact of various system parameters and algorithmic choices. The fundamental tradeoffs between communication performance, computational complexity, and fronthaul signaling requirements are thoroughly analyzed. Finally, the basic algorithms for pilot assignment, dynamic cooperation cluster formation, and power optimization are provided, while open problems related to these and other resource allocation problems are reviewed. All the numerical examples can be reproduced using the accompanying Matlab code.) <|cite_end|> and references therein), the problem is further complicated by the fact that UEs are spatially distributed. Since uplink (UL) and downlink (DL) transmission is operated by user-centric clusters of radio units (RUs), the system cannot establish such clusters for the uRA users since their position is a priori unknown. In order to cope with the distributed and CF nature of the network, we propose to partition the network coverage area into zones (referred to as ``locations'') and assign an uRA codebook to each location, such that users in a certain location make use of the associated codebook. The centralized uRA decoder makes use of the multisource AMP algorithm recently proposed by the authors <|cite_start|> (Reference: Joint Message Detection and Channel Estimation for Unsourced Random Access in Cell-Free User-Centric Wireless Networks: We consider unsourced random access (uRA) in a cell-free (CF) user-centric wireless network, where a large number of potential users compete for a random access slot, while only a finite subset is active. The random access users transmit codewords of length $L$ symbols from a shared codebook, which are received by $B$ geographically distributed radio units (RUs) equipped with $M$ antennas each. 
Our goal is to devise and analyze a \emph{centralized} decoder to detect the transmitted messages (without prior knowledge of the active users) and estimate the corresponding channel state information. A specific challenge lies in the fact that, due to the geographically distributed nature of the CF network, there is no fixed correspondence between codewords and large-scale fading coefficients (LSFCs). To overcome this problem, we propose a scheme where the access codebook is partitioned in "location-based" subcodes, such that users in a particular location make use of the corresponding subcode. The joint message detection and channel estimation is obtained via a novel {\em Approximated Message Passing} (AMP) algorithm to estimate the linear superposition of matrix-valued "sources" corrupted by Gaussian noise. The matrices to be estimated exhibit zero rows for inactive messages and Gaussian-distributed rows corresponding to the active messages. The asymmetry in the LSFCs and message activity probabilities leads to \emph{different statistics} for the matrix sources, which distinguishes the AMP formulation from previous cases. In the regime where the codebook size scales linearly with $L$, while $B$ and $M$ are fixed, we present a rigorous high-dimensional analysis of the proposed AMP algorithm. Then, exploiting the fundamental decoupling principle of AMP, we provide a comprehensive analysis of Neyman-Pearson message detection, along with the subsequent channel estimation.) <|cite_end|>. Messages are detected using a Neyman-Pearson binary hypothesis test on the AMP output. The AMP output also yields the estimates of the channel vectors corresponding to active messages and an estimate of the active users' position on a quantized grid of positions with much finer resolution than the coarse locations. As a result, after a RACH slot, the system knows a) the list of active messages; b) the corresponding uplink channel estimates; c) the position of the users sending these messages. Hence, the system is able to immediately set up user-centric clusters of RUs for each uRA user, receive subsequent uplink data and/or send beamformed downlink data. The proposed scheme offers the potential for seamless connectivity and very low access latency, by avoiding the lengthy explicit pilot assignment and user-centric cluster setup phase which is routinely implied (yet rarely analyzed) in the conventional studies on CF user-centric wireless networks. <|paper_end|>
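As a toy illustration of the detection step just described, the sketch below operates on the decoupled AMP output: after convergence, each codeword contributes an effective observation row that is pure Gaussian noise when the message is inactive and signal-plus-noise when active, so Neyman-Pearson detection reduces to a per-row energy test with a chi-square threshold. The scalar i.i.d. noise variance tau2, the channel variance gamma, and all dimensions are simplifying assumptions; the actual algorithm uses the matrix-valued state-evolution covariance, and the position-estimation step is omitted entirely.

import numpy as np
from scipy.stats import chi2

def np_detect(R, tau2, alpha):
    # R: (num_codewords, d) complex effective observations from AMP, one row
    # per codeword. Under H0 (inactive) a row is CN(0, tau2*I_d), so its
    # squared norm is (tau2/2) times a chi-square with 2d degrees of freedom;
    # we threshold the row energy at false-alarm level alpha.
    num, d = R.shape
    stat = np.sum(np.abs(R) ** 2, axis=1)
    thr = 0.5 * tau2 * chi2.ppf(1.0 - alpha, df=2 * d)
    return stat > thr, thr

# Toy check: 1000 codewords, first 20 active, d = 8 effective dimensions.
rng = np.random.default_rng(1)
num, d, tau2, gamma = 1000, 8, 1.0, 4.0
act = np.zeros(num, dtype=bool); act[:20] = True
cn = lambda s: (rng.normal(size=(num, d))
                + 1j * rng.normal(size=(num, d))) * np.sqrt(s / 2)
R = cn(tau2) + cn(gamma) * act[:, None]        # active rows carry a channel term
det, thr = np_detect(R, tau2, alpha=1e-3)
print("missed:", int(np.sum(act & ~det)),
      "false alarms:", int(np.sum(~act & det)))
# For detected rows, the MMSE shrink gamma/(gamma+tau2)*R[det] would serve as
# the channel estimate under this Gaussian channel prior.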
[ "<|reference_start|> SPARCs for Unsourced Random Access: Unsourced random-access (U-RA) is a type of grant-free random access with a virtually unlimited number of users, of which only a certain number $K_a$ are active on the same time slot. Users employ exactly the same codebook, and the task of the receiver is to decode the list of transmitted messages. We present a concatenated coding construction for U-RA on the AWGN channel, in which a sparse regression code (SPARC) is used as an inner code to create an effective outer OR-channel. Then an outer code is used to resolve the multiple-access interference in the OR-MAC. We propose a modified version of the approximate message passing (AMP) algorithm as an inner decoder and give a precise asymptotic analysis of the error probabilities of the AMP decoder and of a hypothetical optimal inner MAP decoder. This analysis shows that the concatenated construction can achieve a vanishing per-user error probability in the limit of large blocklength and a large number of active users at sum-rates up to the symmetric Shannon capacity, i.e. as long as $K_aR < 0.5\\log_2(1+K_a\\SNR)$. This extends previous point-to-point optimality results about SPARCs to the unsourced multiuser scenario. Furthermore, we give an optimization algorithm to find the power allocation for the inner SPARC code that minimizes the required $\\SNR$. <|reference_end|>", "<|reference_start|> Unsourced Random Access With Coded Compressed Sensing: Integrating AMP and Belief Propagation: Sparse regression codes with approximate message passing (AMP) decoding have gained much attention in recent times. The concepts underlying this coding scheme extend to unsourced random access with coded compressed sensing (CCS), as first demonstrated by Fengler, Jung, and Caire. Specifically, their approach employs a concatenated coding framework with an inner AMP decoder followed by an outer tree decoder. In their original implementation, these two components work independently of each other, with the tree decoder acting on the static output of the AMP decoder. This article introduces a novel framework where the inner AMP decoder and the outer decoder operate in tandem, dynamically passing information back and forth to take full advantage of the underlying CCS structure. This scheme necessitates the redesign of the outer code as to enable belief propagation in a computationally tractable manner. The enhanced architecture exhibits significant performance benefits over a range of system parameters. The error performance of the proposed scheme can be accurately predicted through a set of equations known as state evolution of AMP. These findings are supported both analytically and through numerical methods. <|reference_end|>", "<|reference_start|> {Two-step random access in 5G new radio: Channel structure design and performance: A common design of the random access procedure on the physical random access channel (PRACH) is required for the diverse usage scenarios in the fifth generation new radio (5G NR) mobile networks. Based on the latest 3GPP specifications and evaluation assumptions agreed for Release 16, the 2 step-RACH (2SR) enhancement, composed of the denoted MsgA and MsgB, not only reduces the latency but also the control-signalling overhead due to the reduced number of messages transmitted. The channel structure of MsgA comprises RACH preamble and data in the physical uplink shared channel (PUSCH) while MsgB combines the random access response and the contention resolution. 
This procedure should operate in local area (LA), medium range (MR) and wide area (WA) cells despite the lack of time alignment (TA) in the PUSCH part of MsgA. The demodulation performance degradation observed without time offset compensation at the base station (gNB), specially for MR or WA cells, highlight that practical gNB implementations relying in MAC control element-based TA command for PUSCH time alignment are not conceivable for 2SR. Furthermore, in the case that all preambles from multiple users (UEs) trying to perform the initial access are mapped to the same PUSCH physical resources, the associated data parts overlap and may result in unsuccessful decoding. There is therefore a trade-off between the collision probability of the PUSCH part of MsgA and the resource overhead for 2SR. This paper addresses the channel structure design of this procedure for the preamble and data parts of MsgA together with the receiver processing framework. The performance results suggest that using lower payload sizes provide higher resource utilization and allow more UEs to be multiplexed within the same PUSCH occasion. In addition, using different DMRS ports for UEs sharing same physical resources decrease the probability of failure in the decoding of the data part of MsgA while reduces the resource overhead for 2SR. <|reference_end|>", "<|reference_start|> Joint Message Detection and Channel Estimation for Unsourced Random Access in Cell-Free User-Centric Wireless Networks: We consider unsourced random access (uRA) in a cell-free (CF) user-centric wireless network, where a large number of potential users compete for a random access slot, while only a finite subset is active. The random access users transmit codewords of length $L$ symbols from a shared codebook, which are received by $B$ geographically distributed radio units (RUs) equipped with $M$ antennas each. Our goal is to devise and analyze a \\emph{centralized} decoder to detect the transmitted messages (without prior knowledge of the active users) and estimate the corresponding channel state information. A specific challenge lies in the fact that, due to the geographically distributed nature of the CF network, there is no fixed correspondence between codewords and large-scale fading coefficients (LSFCs). To overcome this problem, we propose a scheme where the access codebook is partitioned in \"location-based\" subcodes, such that users in a particular location make use of the corresponding subcode. The joint message detection and channel estimation is obtained via a novel {\\em Approximated Message Passing} (AMP) algorithm to estimate the linear superposition of matrix-valued \"sources\" corrupted by Gaussian noise. The matrices to be estimated exhibit zero rows for inactive messages and Gaussian-distributed rows corresponding to the active messages. The asymmetry in the LSFCs and message activity probabilities leads to \\emph{different statistics} for the matrix sources, which distinguishes the AMP formulation from previous cases. In the regime where the codebook size scales linearly with $L$, while $B$ and $M$ are fixed, we present a rigorous high-dimensional analysis of the proposed AMP algorithm. Then, exploiting the fundamental decoupling principle of AMP, we provide a comprehensive analysis of Neyman-Pearson message detection, along with the subsequent channel estimation. <|reference_end|>" ]
[ 1, 3, 4, 6 ]
{"<|multi_cite_1_1|>": "ss-1197482", "<|multi_cite_1_2|>": "arxiv-188002", "<|multi_cite_1_3|>": "arxiv-230632", "<|multi_cite_1_4|>": "ss-1350505", "<|cite_2|>": "ss-2439441", "<|cite_3|>": "arxiv-359242", "<|cite_4|>": "arxiv-499524"}
0903.1022
<|paper_start|> Title: On-Off Random Access Channels: A Compressed Sensing Framework Abstract: On-Off Random Access Channels: A Compressed Sensing Framework: This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability lambda, where typically lambda n < m << n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near-far resistance but--at high signal-to-noise ratios (SNRs)--may achieve capacities far below optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection on random-access signalling. Introduction In wireless systems, \emph{random access} refers to any multiple access communication protocol where the users autonomously decide whether or not to transmit depending on their own traffic requirements and estimates of the network load. While random access is best known for its use in packet data communication in wireless local area networks (LANs) <|cite_start|> (Reference: Home networking with IEEE 802.15.4: a developing standard for low-rate wireless personal area networks: This article presents the IEEE 802.15.4 draft standard and its home networking applications. The main features of the standard are network flexibility, low cost, and low power consumption; the standard is suitable for many applications in the home requiring low-data-rate communications in an ad hoc self-organizing network.) <|cite_end|>, this paper considers random access for simple on--off messaging. On-off random access signaling can be used for a variety of control tasks in wireless networks such as user presence indication, initial access, scheduling requests and paging. Random on--off signaling is already used for some of these tasks in current cellular systems <|cite_start|> (Reference: Cdma/hdr: a bandwidth efficient high speed wireless data service for nomadic users: This article presents an approach to providing very high-data-rate downstream Internet access by nomadic users within the current CDMA physical layer architecture. A means for considerably increasing the throughput by optimizing packet data protocols and by other network and coding techniques are presented and supported by simulations and laboratory measurements. The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a discussion of economic considerations in comparison to cable and DSL services.) <|cite_end|> <|cite_start|> (Reference: HSDPA/HSUPA for UMTS: High Speed Radio Access for Mobile Communications: Preface. Acknowledgements. 
Abbreviations. 1. Introduction (Harri Holma and Antti Toskala). 1.1 WCDMA technology and deployment status. 1.2 HSPA standardization and deployment schedule. 1.3 Radio capability evolution with HSPA. 2. HSPA standardization and background (Antti Toskala and Karri Ranta-Aho) 2.1 3GPP. 2.2 References. 3. HSPA architecture and protocols (Antti Toskala and Juho Pirskanen). 3.1 Radio resource management architecture. 3.2 References. 4. HSDPA principles (Juho Pirskanen and Antti Toskala). 4.1 HSDPA vs Release 99 DCH. 4.2 Key technologies with HSDPA. 4.3 High-speed dedicated physical control channel. 4.4 BTS measurements for HSDPA operation. 4.5 Terminal capabilities. 4.6 HSDPA MAC layer operation. 4.7 References. 5. HSUPA principles (Karri Ranta-Aho and Antti Toskala). 5.1 HSUPA vs Release 99 DCH. 5.2 Key technologies with HSUPA. 5.3 E-DCH transport channel and physical channels. 5.4 Physical layer procedures. 5.5 MAC layer. 5.6 Iub parameters. 5.7 Mobility. 5.8 UE capabilities and data rates. 5.9 References and list of related 3GPP specifications. 6. Radio resource management (Harri Holma, Troels Kolding, Klaus Pedersen, and Jeroen Wigard). 6.1 HSDPA radio resource management. 6.2 HSUPA radio resource management. 6.3 References. 7. HSDPA bit rates, capacity and coverage (Frank Frederiksen, Harri Holma, Troels Kolding, and Klaus Pedersen). 7.1 General performance factors. 7.2 Single-user performance. 7.3 Multiuser system performance. 7.4 Iub transmission efficiency. 7.5 Capacity and cost of data delivery. 7.6 Round trip time. 7.7 HSDPA measurements. 7.8 HSDPA performance evolution. 7.9 Conclusions. 7.10 Bibliography. 8. HSUPA bit rates, capacity and coverage (Jussi Jaatinen, Harri Holma, Claudio Rosa, and Jeroen Wigard). 8.1 General performance factors. 8.2 Single-user performance. 8.3 Cell capacity. 8.4 HSUPA performance enhancements. 8.5 Conclusions. 8.6 Bibliography. 9. Application and end-to-end performance (Chris Johnson, Sandro Grech, Harri Holma, and Martin Kristensson) 9.1 Packet application introduction. 9.2 Always-on connectivity. 9.3 Application performance over HSPA. 9.4 Application performance vs network load. 9.5 References. 10. Voice-over-IP (Harri Holma, Esa Malkama ki, and Klaus Pedersen). 10.1 VoIP motivation. 10.2 IP header compression. 10.3 VoIP over HSPA. 10.4 References. 11. RF requirements of an HSPA terminal (Harri Holma, Jussi Numminen, Markus Pettersson, and Antti Toskala). 11.1 Transmitter requirements. 11.2 Receiver requirements. 11.3 Frequency bands and multiband terminals. 11.4 References. Index.) <|cite_end|> The limits of on--off random access signaling with multiple users are not fully understood. To this end, we consider a simple random multiple access channel where $n$ users transmit to a single receiver. Each user is assigned a single codeword which it transmits with probability $\lambda$. We wish to understand the capacity of these channels, by which we mean the total number of degrees of freedom $m$ needed to reliably detect which users transmit as a function of $n$, $\lambda$, and the channel conditions. We also wish to establish performance bounds for specific decoding algorithms. This on--off random access channel is related to the classic multiple access channel (MAC) in network information theory <|cite_start|> (Reference: Multi-way communication channels: ) <|cite_end|> <|cite_start|> (Reference: {Elements of information theory: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. 
Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 
11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.) <|cite_end|>. The theory of the MAC channel is well understood <|cite_start|> (Reference: Minimum probability of error for asynchronous Gaussian multiple-access channels: Consider a Gaussian multiple-access channel shared by K users who transmit asynchronously independent data streams by modulating a set of assigned signal waveforms. The uncoded probability of error achievable by optimum multiuser detectors is investigated. It is shown that the K -user maximum-likelihood sequence detector consists of a bank of single-user matched filters followed by a Viterbi algorithm whose complexity per binary decision is O(2^{K}) . The upper bound analysis of this detector follows an approach based on the decomposition of error sequences. 
The issues of convergence and tightness of the bounds are examined, and it is shown that the minimum multiuser error probability is equivalent in the low-noise region to that of a single-user system with reduced power. These results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.) <|cite_end|> <|cite_start|> (Reference: Spectral efficiency of CDMA with random spreading: The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: (a) optimal joint processing, (b) single-user matched filtering, (c) decorrelation, and (d) MMSE linear processing.) <|cite_end|> <|cite_start|> (Reference: Linear Multiuser Receivers: Effective Interference, Effective Bandwidth and User Capacity: Multiuser receivers improve the performance of spread-spectrum and antenna-array systems by exploiting the structure of the multiaccess interference when demodulating the signal of a user. Much of the previous work on the performance analysis of multiuser receivers has focused on their ability to reject worst case interference. Their performance in a power-controlled network and the resulting user capacity are less well-understood. We show that in a large system with each user using random spreading sequences, the limiting interference effects under several linear multiuser receivers can be decoupled, such that each interferer can be ascribed a level of effective interference that it provides to the user to be demodulated. Applying these results to the uplink of a single power-controlled cell, we derive an effective bandwidth characterization of the user capacity: the signal-to-interference requirements of all the users can be met if and only if the sum of the effective bandwidths of the users is less than the total number of degrees of freedom in the system. The effective bandwidth of a user depends only on its own SIR requirement, and simple expressions are derived for three linear receivers: the conventional matched filter, the decorrelator, and the MMSE receiver. The effective bandwidths under the three receivers serve as a basis for performance comparison.) <|cite_end|> <|cite_start|> (Reference: Blind adaptive multiuser detection: We propose a new blind multiuser signal model and detection framework for solving the near-far problem in synchronous CDMA in this paper. Compared with existing blind detectors, the proposed framework requires a minimum number of previously received signals, which is about the number of interfering users, and no sub-space separation or sequence estimation. Hence its computation complexity and detection delay are much reduced.
Following this framework, several blind multiuser detectors are developed using least squares (LS) estimation, best least unbiased (BLU) estimation and minimum mean-square error (MMSE) estimation criteria and a recursively adaptive procedure is developed for further decreasing the complexity. All these can be easily extended for asynchronous CDMA. The near-far performance of this framework and the trade-off between the complexity and performance are discussed. Computer simulations are provided to demonstrate the performance of the proposed schemes too) <|cite_end|> and has been applied in commercial CDMA systems <|cite_start|> (Reference: Interference Cancellation for Cellular Systems: A Contemporary Overview: Cellular networks today are interference-limited and only becomes increasingly so in the future due to the many users that need to share the spectrum to achieve high-rate multimedia communication. Despite the enormous amount of academic and industrial research in the past 20 years on interference-aware receivers and the large performance improvements promised by these multi-user techniques, today's receivers still generally treat interference as background noise. In this article, we enumerate the reasons for this widespread scepticism, and discuss how current and future trends increases the need for and viability of multi-user receivers for both the uplink, where many asynchronous users are simultaneously detected, and the downlink, where users are scheduled and largely orthogonalized; but the mobile handset still needs to cope with a few dominant interfering base stations. New results for interference cancelling receivers that use conventional front-ends are shown to alleviate many of the shortcomings of prior techniques, particularly for the challenging uplink. This article gives an overview of key recent research breakthroughs on interference cancellation and highlights system-level considerations for future multi-user receivers.) <|cite_end|>. Unfortunately, it is difficult to apply the classic MAC channel analysis directly to the on--off random access channel under consideration here. In the traditional analysis of the MAC channel, the number of users remains constant, while the number of degrees of freedom of the channel goes to infinity. As a result, each user can employ a capacity-achieving code with an infinite block length. However, in the on--off random access channel considered here, as the number of degrees of freedom of the channel is increased, the goal is not to scale the number of bits per user, but rather the total number of users. Since each user only transmits at most one bit of information, channel coding cannot be used for reliability, and the classic MAC capacity results do not apply. Our analysis is instead based on identifying a connection between the on--off random access channel and the recovery of the sparsity pattern of a signal from noisy random linear measurements. The feasibility of recovering sparse, approximately sparse, or compressible signals from a relatively small number of random linear measurements has recently been termed \emph{compressed sensing} <|cite_start|> (Reference: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal $f \in C^N$ and a randomly chosen set of frequencies $\Omega$.
Is it possible to reconstruct $f$ from the partial knowledge of its Fourier coefficients on the set $\Omega$? A typical result of this paper is as follows. Suppose that $f$ is a superposition of $|T|$ spikes $f(t)=\sum_{\tau \in T} f(\tau)\delta(t-\tau)$ obeying $|T| \le C_M \cdot (\log N)^{-1} \cdot |\Omega|$ for some constant $C_M > 0$. We do not know the locations of the spikes nor their amplitudes. Then with probability at least $1-O(N^{-M})$, $f$ can be reconstructed exactly as the solution to the $\ell_1$ minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for $C_M$ which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of $|T|$ spikes may be recovered by convex programming from almost every set of frequencies of size $O(|T| \cdot \log N)$. Moreover, this is nearly optimal in the sense that any method succeeding with probability $1-O(N^{-M})$ would in general require a number of frequency samples at least proportional to $|T| \cdot \log N$. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of $f$.) <|cite_end|> <|cite_start|> (Reference: {Compressed sensing: Signal recovery is a very practical and useful concept in both signal processing and communication area. Basically in compressed sensing, we are interested in compressing a signal, which is sparse in some domain and then, construct the original signal from the compressed one by convex optimization. This is very important to collect as less as measurements from the original signal while having the minimum error in the constructed signal.) <|cite_end|> <|cite_start|> (Reference: {Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?: Suppose we are given a vector $f$ in a class $F \subseteq R^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R>0$ and $p>0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k=1,\ldots,K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then for each $f$ obeying the decay estimate above for some $0<p<1$ and with overwhelming probability, our reconstruction $f^{\#}$, defined as the solution to the constraints $y_k = \langle f^{\#}, X_k \rangle$ with minimal $\ell_1$ norm, obeys $\|f-f^{\#}\|_{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}$, $r=1/p-1/2$.
There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed) <|cite_end|>. When the users in the on--off random access channel employ certain large random codebooks, we show that the problem at the receiver of detecting the active users is precisely the sparsity detection problem addressed in several recent works in the compressed sensing literature <|cite_start|> (Reference: Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting: The problem of recovering the sparsity pattern of a fixed but unknown vector $\beta^* \in \real^p based on a set of $n$ noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension $p$, the sparsity index $s$ (number of non-zero entries in $\beta^*$), and the number of observations $n$ that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work (ARXIV: math.ST/0605740) on sharp thresholds for sparsity recovery using the Lasso ($\ell_1$-constrained quadratic programming) with Gaussian measurement ensembles.) <|cite_end|> <|cite_start|> (Reference: Necessary and Sufficient Conditions on Sparsity Pattern Recovery: The problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m, with signal dimension n and sparsity-level nonzeros k, for asymptotically-reliable detection. We show a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Omega(k log(n-k)) measurements. Conversely, it is shown that this scaling of Omega(k log(n-k)) measurements is sufficient for a remarkably simple ``maximum correlation'' estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and for low SNR improves upon previous work. The sufficient condition provides the first asymptotically-reliable detection guarantee at finite SNR.) 
<|cite_end|> <|cite_start|> (Reference: Sharp thresholds for high-dimensional and noisy recovery of sparsity: The problem of consistently estimating the sparsity pattern of a vector $\betastar \in \real^\mdim$ based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. We analyze the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish a sharp relation between the problem dimension $\mdim$, the number $\spindex$ of non-zero elements in $\betastar$, and the number of observations $\numobs$ that are required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish existence and compute explicit values of thresholds $\ThreshLow$ and $\ThreshUp$ with the following properties: for any $\epsilon > 0$, if $\numobs > 2 (\ThreshUp + \epsilon) \log (\mdim - \spindex) + \spindex + 1$, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for $\numobs < 2 (\ThreshLow - \epsilon) \log (\mdim - \spindex) + \spindex + 1$, then the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that $\ThreshLow = \ThreshUp = 1$, so that the threshold is sharp and exactly determined.) <|cite_end|> <|cite_start|> (Reference: Signal recovery from random measurements via orthogonal matching pursuit: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.) <|cite_end|>. Results in compressed sensing generally provide bounds on the $\ell^2$ estimation error of a signal as a function of the number of measurements, the signal sparsity and other factors. However, what is relevant for the random on--off multiple access channel is detecting the \emph{positions} of the nonzero entries. This problem arises in subset selection in linear regression <|cite_start|> (Reference: Subset selection in Regression: OBJECTIVES Prediction, Explanation, Elimination or What? How Many Variables in the Prediction Formula? Alternatives to Using Subsets 'Black Box' Use of Best-Subsets Techniques LEAST-SQUARES COMPUTATIONS Using Sums of Squares and Products Matrices Orthogonal Reduction Methods Gauss-Jordan v. Orthogonal Reduction Methods Interpretation of Projections Appendix A: Operation Counts for All-Subsets Regression FINDING SUBSETS WHICH FIT WELL Objectives and Limitations of this Chapter Forward Selection Efroymson's Algorithm Backward Elimination Sequential Replacement Algorithm Replacing Two Variables at a Time Generating All Subsets Using Branch-and-Bound Techniques Grouping Variables Ridge Regression and Other Alternatives The Non-Negative Garrote and the Lasso Some Examples Conclusions and Recommendations HYPOTHESIS TESTING Is There any Information in the Remaining Variables? 
Is One Subset Better than Another? Appendix A: Spjftvoll's Method - Detailed Description WHEN TO STOP? What Criterion Should We Use? Prediction Criteria Cross-Validation and the PRESS Statistic Bootstrapping Likelihood and Information-Based Stopping Rules Appendix A. Approximate Equivalence of Stopping Rules ESTIMATION OF REGRESSION COEFFICIENTS Selection Bias Choice Between Two Variables Selection Bias in the General Case, and its Reduction Conditional Likelihood Estimation Estimation of Population Means Estimating Least-Squares Projections Appendix A: Changing Projections to Equate Sums of Squares BAYESIAN METHODS Bayesian Introduction 'Spike and Slab' Prior Normal prior for Regression Coefficients Model Averaging Picking the Best Model CONCLUSIONS AND SOME RECOMMENDATIONS REFERENCES INDEX) <|cite_end|>. By exploiting recent compressed sensing results and providing an analysis of a new algorithm, we are able to provide a number of insights: \begin{itemize} \item \emph{Performance bounds with ML detection:} Recent results in <|cite_start|> (Reference: Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting: The problem of recovering the sparsity pattern of a fixed but unknown vector $\beta^* \in \real^p based on a set of $n$ noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension $p$, the sparsity index $s$ (number of non-zero entries in $\beta^*$), and the number of observations $n$ that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work (ARXIV: math.ST/0605740) on sharp thresholds for sparsity recovery using the Lasso ($\ell_1$-constrained quadratic programming) with Gaussian measurement ensembles.) <|cite_end|> <|cite_start|> (Reference: Necessary and Sufficient Conditions on Sparsity Pattern Recovery: The problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m, with signal dimension n and sparsity-level nonzeros k, for asymptotically-reliable detection. We show a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Omega(k log(n-k)) measurements. Conversely, it is shown that this scaling of Omega(k log(n-k)) measurements is sufficient for a remarkably simple ``maximum correlation'' estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. 
The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and for low SNR improves upon previous work. The sufficient condition provides the first asymptotically-reliable detection guarantee at finite SNR.) <|cite_end|> <|cite_start|> (Reference: Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices: We study the information-theoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations $n$, the ambient signal dimension $p$, and the signal sparsity $k$ are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields sharp characterizations of when the optimal decoder can recover a signal for various scalings of the sparsity $k$ and sample size $n$, including the important special case of linear sparsity ($k = \Theta(p)$) using a linear scaling of observations ($n = \Theta(p)$). Our second contribution is to prove necessary conditions on the number of observations $n$ required for asymptotically reliable recovery using a class of $\gamma$-sparsified measurement matrices, where the measurement sparsity $\gamma(n, p, k) \in (0,1]$ corresponds to the fraction of non-zero entries per row. Our analysis allows general scaling of the quadruplet $(n, p, k, \gamma)$, and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.) <|cite_end|> provide simple upper and lower bounds on the number of measurements required to detect the users reliably assuming maximum likelihood (ML) detection. One of the consequences of these bounds is that, unlike the classic MAC channel, the sum rate achievable with random access signaling can be strictly less than the rate achievable with coordinated transmissions with the same total power. \item \emph{Potential gains over single-user detection:} ML detection can be considered as a type of multiuser detection. Current commercial designs, however, almost universally use simple single-user detection (see, for example <|cite_start|> (Reference: Fast acquisition scheme and implementation of PRACH in WCDMA system: The performance and implementation of PRACH (physical random access channel) acquisition in WCDMA system is investigated. The analysis shows that the conventional methods are not satisfying. Thus we proposed the quasi-matched filter acquisition scheme of PRACH preamble which based on fast Hadamard transform. We implement this method by hardware in the practical WCDMA field trial system. The simulation and test results show that the proposed scheme achieves the following performance: the detection probability with E/sub b//N/sub 0/ =7 dB is not less than 95%, and the mean acquisition time is less than 1.33 ms.) <|cite_end|> for a typical WCDMA design). 
The single-user detection performance can be estimated by bounds given in <|cite_start|> (Reference: Compressed Sensing and Redundant Dictionaries: This article extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix, which is a composition of a random matrix of certain type and a deterministic dictionary, has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via Basis Pursuit from a small number of random measurements. Further, thresholding is investigated as recovery algorithm for compressed sensing and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.) <|cite_end|> <|cite_start|> (Reference: Necessary and Sufficient Conditions on Sparsity Pattern Recovery: The problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m, with signal dimension n and sparsity-level nonzeros k, for asymptotically-reliable detection. We show a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Omega(k log(n-k)) measurements. Conversely, it is shown that this scaling of Omega(k log(n-k)) measurements is sufficient for a remarkably simple ``maximum correlation'' estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and for low SNR improves upon previous work. The sufficient condition provides the first asymptotically-reliable detection guarantee at finite SNR.) <|cite_end|>. The bounds show that ML detection offers a potentially large gain over single-user detection, particularly at high SNRs. The gap at high SNRs can be explained by a certain \emph{self-noise} limit experienced by single-user detection. \item \emph{Lasso- and OMP-based multiuser detection and near--far resistance:} ML sparsity detection is a well-known NP-hard problem <|cite_start|> (Reference: Sparse approximate solutions to linear systems: The following problem is considered: given a matrix $A$ in ${\bf R}^{m \times n}$, ($m$ rows and $n$ columns), a vector $b$ in ${\bf R}^m$, and ${\bf \epsilon} > 0$, compute a vector $x$ satisfying $\| Ax - b \|_2 \leq {\bf \epsilon}$ if such exists, such that $x$ has the fewest number of non-zero entries over all such vectors. It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18 \mbox{ Opt} ({\bf \epsilon}/2) \|{\bf A}^+\|^2_2 \ln(\|b\|_2/{\bf \epsilon}) \right\rceil$ non-zero entries, where $\mbox{Opt}({\bf \epsilon}/2)$ is the optimum number of nonzero entries at error ${\bf \epsilon}/2$, ${\bf A}$ is the matrix obtained by normalizing each column of $A$ with respect to the $L_2$ norm, and ${\bf A}^+$ is its pseudo-inverse.) <|cite_end|>. 
However, there are practical, but suboptimal, algorithms such as the orthogonal matching pursuit (OMP) <|cite_start|> (Reference: Orthogonal least squares methods and their application to non-linear system identification: Abstract Identification algorithms based on the well-known linear least squares methods of gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, Givens method, and singular value decomposition are reviewed. The classical Gram-Schmidt, modified Gram-Schmidt, and Householder transformation algorithms are then extended to combine structure determination, or which terms to include in the model, and parameter estimation in a very simple and efficient manner for a class of multivariate discrete-time non-linear stochastic systems which are linear in the parameters.) <|cite_end|> <|cite_start|> (Reference: Matching Pursuits with time-frequency dictionaries: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser see (IEEE Trans. Informat. Theory, vol. 38, Mar. 1992). >) <|cite_end|> <|cite_start|> (Reference: Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition: We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks e.g. affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Analysis of Epileptic Activity Based on Brain Mapping of EEG Adaptive Time-Frequency Decomposition: ) <|cite_end|> and ``lasso" <|cite_start|> (Reference: Regression shrinkage and selection via the {lasso: SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. 
There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.) <|cite_end|> methods in sparse estimation that can be used as multiuser detection methods for the on--off random access channel. In comparison to single-user detection, we show that these methods can offer improved performance when the dynamic range in received power levels is large. This near--far resistance feature is similar to that of standard MMSE multiuser detection in CDMA systems <|cite_start|> (Reference: Near-far resistance of multiuser detectors in asynchronous channels: Consideration is given to an asynchronous code-division multiple-access environment in which receiver has knowledge of the signature waveforms of all the users. Under the assumption of white Gaussian background noise, the authors compare detectors by their worst case bit error rate in a near-far environment with low background noise, where the received energies of the users are unknown to the receiver and are not necessarily similar. Conventional single-user detection in a multiuser channel is not near-far resistant, and the substantially higher performance of the optimum multiuser detector requires exponential complexity in the number of users. The authors explore suboptimal demodulation schemes which exhibit a low order of complexity while not exhibiting the impairment of the conventional single-user detector. It is shown that there exists a linear detector whose bit-error-rate is independent of the energy of the interfering users. It is also shown that the near-far resistance of optimum multiuser detection can be achieved by a linear detector. The optimum linear detector for worst-case energies is found, along with existence conditions, which are always satisfied in the models of practical interest. >) <|cite_end|>. \item \emph{Improved high SNR performance with power shaping:} While both lasso and OMP offer improvements over single-user detection, there is still a large gap in the performance of these algorithms in comparison to ML detection at high SNRs. Specifically, at high SNRs, ML achieves a fundamentally different scaling in the number of measurements required for reliable detection than that required by lasso, OMP and single-user detection. We show, however, that when accurate power control is available, the ML scaling can be theoretically achieved with a simplified version of OMP, which we call sequential OMP (SeqOMP). The method is analogous to the classic successive interference cancellation (SIC) method for the MAC channel. Specifically, users are deliberately targeted at different received power levels and then detected and cancelled out in descending order of power (a small illustrative sketch of this detect-and-cancel loop is given below). While SeqOMP shows significant gains over single-user detection, for most practical problem sizes it does worse than standard OMP, even without power shaping. However, we show, at least by simulation, that power shaping can improve the performance of OMP as well.
\end{itemize} The connection between sparsity detection methods such as OMP and the SIC technique for the MAC channel has also been observed in the recent work of Jin and Rao <|cite_start|> (Reference: Performance limits of matching pursuit algorithms: In this paper, we examine the performance limits of the Orthogonal Matching Pursuit (OMP) algorithm, which has proven to be effective in solving for sparse solutions to inverse problem arising in overcomplete representations. To identify these limits, we exploit the connection between sparse solution problem and multiple access channel (MAC) in wireless communication domain. The forward selective nature of OMP helps it to be recognized as a successive interference cancellation (SIC) scheme that decodes non-zero entries one at a time in a specific order. We leverage this SIC decoding order and utilize the criterion for successful decoding to develop the information-theoretic performance limitation for OMP, which involves factors such as dictionary dimension, signal-to-noise-ratio, and importantly, the relative behavior of the non- zeros entries. Supported by computer simulations, our proposed criterion is demonstrated to be asymptotically effective in explaining the behavior of OMP.) <|cite_end|>. A related work by Wipf and Rao <|cite_start|> (Reference: Comparing the effects of different weight distributions on finding sparse representations: Given a redundant dictionary of basis vectors (or atoms), our goal is to find maximally sparse representations of signals. Previously, we have argued that a sparse Bayesian learning (SBL) framework is particularly well-suited for this task, showing that it has far fewer local minima than other Bayesian-inspired strategies. In this paper, we provide further evidence for this claim by proving a restricted equivalence condition, based on the distribution of the nonzero generating model weights, whereby the SBL solution will equal the maximally sparse representation. We also prove that if these nonzero weights are drawn from an approximate Jeffreys prior, then with probability approaching one, our equivalence condition is satisfied. Finally, we motivate the worst-case scenario for SBL and demonstrate that it is still better than the most widely used sparse representation algorithms. These include Basis Pursuit (BP), which is based on a convex relaxation of the l0 (quasi)-norm, and Orthogonal Matching Pursuit (OMP), a simple greedy strategy that iteratively selects basis vectors most aligned with the current residual.) <|cite_end|> also gave some empirical evidence for the benefit of power shaping when used in conjunction with sparse Bayesian learning algorithms. Both the works <|cite_start|> (Reference: Performance limits of matching pursuit algorithms: In this paper, we examine the performance limits of the Orthogonal Matching Pursuit (OMP) algorithm, which has proven to be effective in solving for sparse solutions to inverse problem arising in overcomplete representations. To identify these limits, we exploit the connection between sparse solution problem and multiple access channel (MAC) in wireless communication domain. The forward selective nature of OMP helps it to be recognized as a successive interference cancellation (SIC) scheme that decodes non-zero entries one at a time in a specific order. 
We leverage this SIC decoding order and utilize the criterion for successful decoding to develop the information-theoretic performance limitation for OMP, which involves factors such as dictionary dimension, signal-to-noise-ratio, and importantly, the relative behavior of the non- zeros entries. Supported by computer simulations, our proposed criterion is demonstrated to be asymptotically effective in explaining the behavior of OMP.) <|cite_end|> and <|cite_start|> (Reference: Comparing the effects of different weight distributions on finding sparse representations: Given a redundant dictionary of basis vectors (or atoms), our goal is to find maximally sparse representations of signals. Previously, we have argued that a sparse Bayesian learning (SBL) framework is particularly well-suited for this task, showing that it has far fewer local minima than other Bayesian-inspired strategies. In this paper, we provide further evidence for this claim by proving a restricted equivalence condition, based on the distribution of the nonzero generating model weights, whereby the SBL solution will equal the maximally sparse representation. We also prove that if these nonzero weights are drawn from an approximate Jeffreys prior, then with probability approaching one, our equivalence condition is satisfied. Finally, we motivate the worst-case scenario for SBL and demonstrate that it is still better than the most widely used sparse representation algorithms. These include Basis Pursuit (BP), which is based on a convex relaxation of the l0 (quasi)-norm, and Orthogonal Matching Pursuit (OMP), a simple greedy strategy that iteratively selects basis vectors most aligned with the current residual.) <|cite_end|> are discussed in more detail below. The results in this paper make the connections between sparsity detection and the random access MAC channel more precise by giving concrete conditions on the detectability of the sparsity pattern, characterizing the optimal power shaping distribution, and contrasting the classic MAC and on--off random access MAC capacities. The remainder of the paper is organized as follows. The setting is formalized in Section~\ref{sec:chanMod}. In particular, we define all the key problem parameters. Results that can be derived from existing necessary and sufficient conditions for sparsity pattern recovery are then presented in Section~\ref{sec:csAnalysis}. We will see that there is a potentially large performance gap between single-user detection and the optimal ML detection. Existing ``practical'' multiuser detection techniques perform significantly better than single-user detection in that they are near--far resistant. However, their performance saturates at high SNRs, falling well short of ML detection. Section~\ref{sec:SOMP} presents a new detection algorithm, sequential orthogonal matching pursuit (SeqOMP), that has near--far resistance under certain assumptions on power control. Furthermore, with optimal power shaping, it does not suffer from saturation at high SNRs. Numerical experiments are reported in Section~\ref{sec:sim}. Connections to MAC capacity are discussed in Section~\ref{sec:capacity}, conclusions are given in Section~\ref{sec:concl}, and proofs are relegated to the Appendix. <|paper_end|>
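To make the detection algorithms discussed above concrete, the following self-contained Python/NumPy sketch simulates the on--off model y = Ax + w with a random Gaussian codebook and compares single-user (maximum-correlation) detection against a SeqOMP-style detect-and-cancel loop with power shaping. This is an illustration only, not the paper's implementation: the dimensions, the exponential power profile, and the noise level are arbitrary assumptions chosen for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                           # users, measurements, active users (assumed sizes)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian codebook

# Power shaping: give the k active users exponentially spaced received powers.
active = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[active] = np.sqrt(2.0 ** np.arange(k))       # assumed power profile
y = A @ x + 0.05 * rng.standard_normal(m)      # noisy received vector

# Single-user detection: keep the k largest matched-filter correlations.
single_user = set(np.argsort(np.abs(A.T @ y))[-k:].tolist())

# SeqOMP-style detection: repeatedly pick the strongest remaining user,
# estimate its gain by a one-column least squares fit, and cancel it.
residual, sequential = y.copy(), set()
for _ in range(k):
    corr = np.abs(A.T @ residual)
    corr[list(sequential)] = 0.0               # do not re-pick detected users
    j = int(np.argmax(corr))
    sequential.add(j)
    gain = (A[:, j] @ residual) / (A[:, j] @ A[:, j])
    residual = residual - gain * A[:, j]

truth = set(active.tolist())
print("single-user detection errors:", len(truth ^ single_user))
print("sequential detection errors :", len(truth ^ sequential))

With well-separated powers, each cancellation step removes the strongest remaining user's contribution before the next detection, which is precisely the self-noise-removal mechanism that the power-shaping argument in the bullet list relies on.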
[ "<|reference_start|> Home networking with IEEE 802.15.4: a developing standard for low-rate wireless personal area networks: This article presents the IEEE 802.15.4 draft standard and its home networking applications. The main features of the standard are network flexibility, low cost, and low power consumption; the standard is suitable for many applications in the home requiring low-data-rate communications in an ad hoc self-organizing network. <|reference_end|>", "<|reference_start|> Subset selection in Regression: OBJECTIVES Prediction, Explanation, Elimination or What? How Many Variables in the Prediction Formula? Alternatives to Using Subsets 'Black Box' Use of Best-Subsets Techniques LEAST-SQUARES COMPUTATIONS Using Sums of Squares and Products Matrices Orthogonal Reduction Methods Gauss-Jordan v. Orthogonal Reduction Methods Interpretation of Projections Appendix A: Operation Counts for All-Subsets Regression FINDING SUBSETS WHICH FIT WELL Objectives and Limitations of this Chapter Forward Selection Efroymson's Algorithm Backward Elimination Sequential Replacement Algorithm Replacing Two Variables at a Time Generating All Subsets Using Branch-and-Bound Techniques Grouping Variables Ridge Regression and Other Alternatives The Non-Negative Garrote and the Lasso Some Examples Conclusions and Recommendations HYPOTHESIS TESTING Is There any Information in the Remaining Variables? Is One Subset Better than Another? Appendix A: Spjftvoll's Method - Detailed Description WHEN TO STOP? What Criterion Should We Use? Prediction Criteria Cross-Validation and the PRESS Statistic Bootstrapping Likelihood and Information-Based Stopping Rules Appendix A. Approximate Equivalence of Stopping Rules ESTIMATION OF REGRESSION COEFFICIENTS Selection Bias Choice Between Two Variables Selection Bias in the General Case, and its Reduction Conditional Likelihood Estimation Estimation of Population Means Estimating Least-Squares Projections Appendix A: Changing Projections to Equate Sums of Squares BAYESIAN METHODS Bayesian Introduction 'Spike and Slab' Prior Normal prior for Regression Coefficients Model Averaging Picking the Best Model CONCLUSIONS AND SOME RECOMMENDATIONS REFERENCES INDEX <|reference_end|>", "<|reference_start|> Fast acquisition scheme and implementation of PRACH in WCDMA system: The performance and implementation of PRACH (physical random access channel) acquisition in WCDMA system is investigated. The analysis shows that the conventional methods are not satisfying. Thus we proposed the quasi-matched filter acquisition scheme of PRACH preamble which based on fast Hadamard transform. We implement this method by hardware in the practical WCDMA field trial system. The simulation and test results show that the proposed scheme achieves the following performance: the detection probability with E/sub b//N/sub 0/ =7 dB is not less than 95%, and the mean acquisition time is less than 1.33 ms. <|reference_end|>", "<|reference_start|> Orthogonal least squares methods and their application to non-linear system identification: Abstract Identification algorithms based on the well-known linear least squares methods of gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, Givens method, and singular value decomposition are reviewed. 
The classical Gram-Schmidt, modified Gram-Schmidt, and Householder transformation algorithms are then extended to combine structure determination, or which terms to include in the model, and parameter estimation in a very simple and efficient manner for a class of multivariate discrete-time non-linear stochastic systems which are linear in the parameters. <|reference_end|>" ]
[ 0, 17, 21, 25 ]
{"<|cite_1|>": "ss-1407069", "<|multi_cite_2_1|>": "ss-808417", "<|multi_cite_2_2|>": "ss-2003089", "<|multi_cite_3_1|>": "ss-791874", "<|multi_cite_3_2|>": "ss-708234", "<|multi_cite_4_1|>": "ss-1147586", "<|multi_cite_4_2|>": "ss-1011535", "<|multi_cite_4_3|>": "ss-2003090", "<|multi_cite_4_4|>": "ss-1024986", "<|cite_5|>": "ss-1546953", "<|multi_cite_6_1|>": "ss-772570", "<|multi_cite_6_2|>": "ss-808398", "<|multi_cite_6_3|>": "ss-761778", "<|multi_cite_7_1|>": "arxiv-676904", "<|multi_cite_7_2|>": "arxiv-3325", "<|multi_cite_7_3|>": "arxiv-676837", "<|multi_cite_7_4|>": "ss-1274756", "<|cite_8|>": "ss-1620948", "<|multi_cite_9_1|>": "arxiv-676904", "<|multi_cite_9_2|>": "arxiv-3325", "<|multi_cite_9_3|>": "arxiv-3931", "<|cite_10|>": "ss-2003091", "<|multi_cite_11_1|>": "arxiv-676889", "<|multi_cite_11_2|>": "arxiv-3325", "<|cite_12|>": "ss-1931991", "<|multi_cite_13_1|>": "ss-1973667", "<|multi_cite_13_2|>": "ss-1029272", "<|multi_cite_13_3|>": "ss-1156325", "<|multi_cite_13_4|>": "ss-762179", "<|cite_14|>": "ss-881994", "<|cite_15|>": "ss-1024982", "<|cite_16|>": "ss-993972", "<|cite_17|>": "ss-2003092", "<|cite_18|>": "ss-993972", "<|cite_19|>": "ss-2003092"}
1905.11625
<|paper_start|> Title: NIL: Learning Nonlinear Interpolants Abstract: NIL: Learning Nonlinear Interpolants: Nonlinear interpolants have been shown useful for the verification of programs and hybrid systems in contexts of theorem proving, model checking, abstract interpretation, etc. The underlying synthesis problem, however, is challenging and existing methods have limitations on the form of formulae to be interpolated. We leverage classification techniques with space transformations and kernel tricks as established in the realm of machine learning, and present a counterexample-guided method named NIL for synthesizing polynomial interpolants, thereby yielding a unified framework tackling the interpolation problem for the general quantifier-free theory of nonlinear arithmetic, possibly involving transcendental functions. We prove the soundness of NIL and propose sufficient conditions under which NIL is guaranteed to converge, i.e., the derived sequence of candidate interpolants converges to an actual interpolant, and is complete, namely the algorithm terminates by producing an interpolant if there exists one. The applicability and effectiveness of our technique are demonstrated experimentally on a collection of representative benchmarks from the literature, where in particular, our method suffices to address more interpolation tasks, including those with perturbations in parameters, and in many cases synthesizes simpler interpolants compared with existing approaches. Introduction \label{sec_intro} Interpolation-based technique provides a powerful mechanism for local and modular reasoning, thereby improving scalability of various verification techniques, e.g., theorem proving, model checking and abstract interpretation, to name just a few. The study of interpolation was pioneered by Kraj{\'{\i}{\v c}}ek <|cite_start|> (Reference: Interpolation theorems, lower bounds for proof systems, and independence results for bounded arithmetic: A proof of the (propositional) Craig interpolation theorem for cut-free sequent calculus yields that a sequent with a cut-free proof (or with a proof with cut-formulas of restricted form; in particular, with only analytic cuts) with k inferences has an interpolant whose circuit-size is at most k. We give a new proof of the interpolation theorem based on a communication complexity approach which allows a similar estimate for a larger class of proofs. We derive from it several corollaries: 1. Feasible interpolation theorems for the following proof systems: (a) resolution. (b) a subsystem of LK corresponding to the bounded arithmetic theory S 2 2 (). (c) linear equational calculus. (d) cutting planes. 2. New proofs of the exponential lower bounds (for new formulas) (a) for resolution ((15]). (b) for the cutting planes proof system with coeecients written in unary ((4]). 3. An alternative proof of the independence result of 43] concerning the provability of circuit-size lower bounds in the bounded arithmetic theory S 2 2 (). 1 In the other direction we show that a depth 2 subsystem of LK does not admit feasible monotone interpolation theorem (the so called Lyndon theorem), and that a feasible monotone interpolation theorem for the depth 1 subsystem of LK would yield new exponential lower bounds for resolution proofs of the weak pigeonhole principle.) <|cite_end|> and Pudl\'{a}k <|cite_start|> (Reference: Lower bounds for resolution and cutting plane proofs and monotone computations: Abstract We prove an exponential lower bound on the length of cutting plane proofs. 
The proof uses an extension of a lower bound for monotone circuits to circuits which compute with real numbers and use nondecreasing functions as gates. The latter result is of independent interest, since, in particular, it implies an exponential lower bound for some arithmetic circuits.) <|cite_end|> in connection with theorem proving, by McMillan <|cite_start|> (Reference: Interpolation and SAT-Based Model Checking: ) <|cite_end|> in the context of model checking, by Graf and Sa\"{i}di <|cite_start|> (Reference: Construction of abstract state graphs with {PVS}: We describe in this paper a method based on abstract interpretation which, from a theoretical point of view, is similar to the splitting methods proposed in DGG93, Dam96] but the weaker abstract transition relation we use, allows us to construct automatically abstract state graphs paying a reasonable price. We consider a particular set of abstract states: the set of the monomials on a set of state predicates ' 1 ; :::; ' `. The successor of an abstract state m for a transition of the program is the least monomial satissed by all successors via of concrete states satisfying m. This successor m 0 can be determined exactly if for each predicate ' i it can be determined if ' i or :' i is a postcondition of m for. In order to do this, we use the Pvs theorem prover SOR93] and our Pvs-interface deened in GS96]. If the tactic used for the proof of the veriication conditions is not powerful enough, only an upper approximation of the abstract successor m is constructed. This allows us to compute upper approximations of the set of reachable states which is suucient for the veriication of invariants. Also, for almost the same price, an abstract state graph can be constructed: the expensive part of the algorithm is the computation of an abstract successor as it requires several validity checks. Therefore, only relatively small state graphs can be constructed and the additional cost for the storage of the transition relation is almost negligible. An abstract state graph can be used for the veriication of any property expressible as a temporal logic formula without existential quantiication over paths, due to the results on property preservation CGL94, LGS + 95] using a model checker. An abstract state graph represents also a relatively precise global control graph of the system (the guards of the system are used for the construction of the abstract state graph) which can be used for a backwards veriication of invariants as described in GS96]. A global control graph allows us to generate much stronger structural in-variants using the tool described in ?, BBC + 96] than the initial presentation as a parallel composition of processes. In the case that the control of the system is completely independent of the data part, a control graph is obtained much easier by partial evaluation as proposed in HGD95]; our method allows to mechanize the …) <|cite_end|>, McMillan <|cite_start|> (Reference: An Interpolating Theorem Prover: We present a method of deriving Craig interpolants from proofs in the quantifier-free theory of linear inequality and uninterpreted function symbols, and an interpolating theorem prover based on this method. The prover has been used for predicate refinement in the Blast software model checker, and can also be used directly for model checking infinite-state systems, using interpolation-based image approximation.) <|cite_end|> and Henzinger et al. 
<|cite_start|> (Reference: Abstractions from Proofs: The success of model checking for large programs depends crucially on the ability to efficiently construct parsimonious abstractions. A predicate abstraction is parsimonious if at each control location, it specifies only relationships between current values of variables, and only those which are required for proving correctness. Previous methods for automatically refining predicate abstractions until sufficient precision is obtained do not systematically construct parsimonious abstractions: predicates usually contain symbolic variables, and are added heuristically and often uniformly to many or all control locations at once. We use Craig interpolation to efficiently construct, from a given abstract error trace which cannot be concretized, a parsominous abstraction that removes the trace. At each location of the trace, we infer the relevant predicates as an interpolant between the two formulas that define the past and the future segment of the trace. Each interpolant is a relationship between current values of program variables, and is relevant only at that particular program location. It can be found by a linear scan of the proof of infeasibility of the trace.We develop our method for programs with arithmetic and pointer expressions, and call-by-value function calls. For function calls, Craig interpolation offers a systematic way of generating relevant predicates that contain only the local variables of the function and the values of the formal parameters when the function was called. We have extended our model checker BLAST with predicate discovery by Craig interpolation, and applied it successfully to C programs with more than 130,000 lines of code, which was not possible with approaches that build less parsimonious abstractions.) <|cite_end|> pertaining to abstraction like CEGAR <|cite_start|> (Reference: Counterexample-{{Guided Abstraction Refinement: We present an automatic iterative abstraction-refinement methodology in which the initial abstract model is generated by an automatic analysis of the con- trol structures in the program to be verified. Abstract models may admit erroneous (or "spurious") counterexamples. We devise new symbolic techniques which ana- lyze such counterexamples and refine the abstract model correspondingly. The refinement algorithm keeps the size of the abstract state space small due to the use of abstraction functions which distinguish many degrees of abstraction for each program variable. We describe an implementation of our methodology in NuSMV. Practical experiments including a large Fujitsu IP core design with ab- out 500 latches and 10000 lines of SMV code confirm the effectiveness of our approach.) <|cite_end|>, and by Wang et al. <|cite_start|> (Reference: Predicate Generation for Learning-Based Quantifier-Free Loop Invariant Inference: We address the predicate generation problem in the context of loop invariant inference. Motivated by the interpolation-based abstraction refinement technique, we apply the interpolation theorem to synthesize predicates implicitly implied by program texts. Our technique is able to improve the effectiveness and efficiency of the learning-based loop invariant inference algorithm in [14]. We report experiment results of examples from Linux, SPEC2000, and Tar utility.) <|cite_end|> in the context of learning-based invariant generation. 
Developing efficient algorithms for generating interpolants for various theories and their combination has become an active research area, see e.g., <|cite_start|> (Reference: An Interpolating Theorem Prover: We present a method of deriving Craig interpolants from proofs in the quantifier-free theory of linear inequality and uninterpreted function symbols, and an interpolating theorem prover based on this method. The prover has been used for predicate refinement in the Blast software model checker, and can also be used directly for model checking infinite-state systems, using interpolation-based image approximation.) <|cite_end|> <|cite_start|> (Reference: A Combination Method for Generating Interpolants: ) <|cite_end|> <|cite_start|> (Reference: Interpolation for data structures: Interpolation based automatic abstraction is a powerful and robust technique for the automated analysis of hardware and software systems. Its use has however been limited to control-dominated applications because of a lack of algorithms for computing interpolants for data structures used in software programs. We present efficient procedures to construct interpolants for the theories of arrays, sets, and multisets using the reduction approach for obtaining decision procedures for complex data structures. The approach taken is that of reducing the theories of such data structures to the theories of equality and linear arithmetic for which efficient interpolating decision procedures exist. This enables interpolation based techniques to be applied to proving properties of programs that manipulate these data structures.) <|cite_end|> <|cite_start|> (Reference: Constraint solving for interpolation: ) <|cite_end|> <|cite_start|> (Reference: On Interpolation and Symbol Elimination in Theory Extensions: In this paper we study possibilities of interpolation and symbol elimination in extensions of a theory $${\mathcal T}_0$$ with additional function symbols whose properties are axiomatised using a set of clauses. We analyze situations in which we can perform such tasks in a hierarchical way, relying on existing mechanisms for symbol elimination in $${\mathcal T}_0$$. This is for instance possible if the base theory allows quantifier elimination. We analyze possibilities of extending such methods to situations in which the base theory does not allow quantifier elimination but has a model completion which does. We illustrate the method on various examples.) <|cite_end|> <|cite_start|> (Reference: Efficient Interpolant Generation in Satisfiability Modulo Theories: ) <|cite_end|> <|cite_start|> (Reference: Quantified Invariant Generation Using an Interpolating Saturation Prover: ) <|cite_end|>. Though established methods addressing interpolant generation for Presburger arithmetic, decidable fragments of first-order logic, theory of equality over uninterpreted functions (EUFs) as well as their combination have been extensively studied in the literature, there appears to be little work on synthesizing nonlinear interpolants. Dai et al. proposed an algorithm in <|cite_start|> (Reference: Generating Non-Linear Interpolants by Semidefinite Programming: Interpolation-based techniques have been widely and successfully applied in the verification of hardware and software, e.g., in bounded-model check- ing, CEGAR, SMT, etc., whose hardest part is how to synthesize interpolants. Various work for discovering interpolants for propositional logic, quantifier-free fragments of first-order theories and their combinations have been proposed. 
However, little work focuses on discovering polynomial interpolants in the literature. In this paper, we provide an approach for constructing non-linear interpolants based on semidefinite programming, and show how to apply such results to the verification of programs by examples.) <|cite_end|> for generating interpolants for nonlinear polynomial inequalities based on the existence of a witness guaranteed by Stengle's Positivstellensatz <|cite_start|> (Reference: A nullstellensatz and a positivstellensatz in semialgebraic geometry: ) <|cite_end|> that can be computed using semi-definite programming (SDP). A major limitation of this method is that the two mutually contradictory formulas to be interpolated must share the same set of variables. Okudono et al. extended <|cite_start|> (Reference: Generating Non-Linear Interpolants by Semidefinite Programming: Interpolation-based techniques have been widely and successfully applied in the verification of hardware and software, e.g., in bounded-model check- ing, CEGAR, SMT, etc., whose hardest part is how to synthesize interpolants. Various work for discovering interpolants for propositional logic, quantifier-free fragments of first-order theories and their combinations have been proposed. However, little work focuses on discovering polynomial interpolants in the literature. In this paper, we provide an approach for constructing non-linear interpolants based on semidefinite programming, and show how to apply such results to the verification of programs by examples.) <|cite_end|> in <|cite_start|> (Reference: Sharper and Simpler Nonlinear Interpolants for Program Verification: Interpolation of jointly infeasible predicates plays important roles in various program verification techniques such as invariant synthesis and CEGAR. Intrigued by the recent result by Dai et al.\ that combines real algebraic geometry and SDP optimization in synthesis of polynomial interpolants, the current paper contributes its enhancement that yields sharper and simpler interpolants. The enhancement is made possible by: theoretical observations in real algebraic geometry; and our continued fraction-based algorithm that rounds off (potentially erroneous) numerical solutions of SDP solvers. Experiment results support our tool's effectiveness; we also demonstrate the benefit of sharp and simple interpolants in program verification examples.) <|cite_end|> to cater for the so-called sharper and simpler interpolants by developing a continued fraction-based algorithm that rounds off numerical solutions. In <|cite_start|> (Reference: Interpolant Synthesis for Quadratic Polynomial Inequalities and Combination with EUF: ) <|cite_end|>, Gan et al. considered the interpolation for inequalities combined with EUFs by employing the hierarchical calculus framework proposed in <|cite_start|> (Reference: Interpolation in local theory extensions: In this paper we study interpolation in local extensions of a base theory. We identify situations in which it is possible to obtain interpolants in a hierarchical manner, by using a prover and a procedure for generating interpolants in the base theory as black-boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.)
<|cite_end|> (and its extension <|cite_start|> (Reference: On Interpolation and Symbol Elimination in Theory Extensions: In this paper we study possibilities of interpolation and symbol elimination in extensions of a theory $\mathcal{T}_0$ with additional function symbols whose properties are axiomatised using a set of clauses. We analyze situations in which we can perform such tasks in a hierarchical way, relying on existing mechanisms for symbol elimination in $\mathcal{T}_0$. This is for instance possible if the base theory allows quantifier elimination. We analyze possibilities of extending such methods to situations in which the base theory does not allow quantifier elimination but has a model completion which does. We illustrate the method on various examples.) <|cite_end|>), while the inequalities are limited to be of the concave quadratic form. In <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>, Gao and Zufferey transformed proof traces from $\delta$-complete decision procedures into interpolants, composed of Boolean combinations of linear constraints, which can deal with certain transcendental functions beyond polynomials. The techniques of encoding interpolants as logical combinations of linear constraints, including <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>, <|cite_start|> (Reference: Craig Interpolation in the Presence of Non-linear Constraints: ) <|cite_end|> and <|cite_start|> (Reference: Interpolants as Classifiers: ) <|cite_end|>, however, yield potentially large interpolants (requiring even an infinite length in the worst case) and their usage thus becomes difficult in practical applications (cf. Example~\ref{exmp:tacas16}). Interpolants can be viewed as classifiers that distinguish, in the context of program verification for instance, positive program states from negative ones (unreachable/error states) and consequently the state-of-the-art classification algorithms can be leveraged for synthesizing interpolants. The universal applicability of classification techniques substantially extends the scope of theories admitting interpolant generation. This idea was first employed by Sharma et al. in <|cite_start|> (Reference: Interpolants as Classifiers: ) <|cite_end|>, which infers linear interpolants through hyperplane-classifiers generated by support vector machines (SVMs) <|cite_start|> (Reference: Pattern recognition using generalized portrait method: ) <|cite_end|> <|cite_start|> (Reference: A Training Algorithm For Optimal Margin Classifiers: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.) 
<|cite_end|> whilst handling superficial nonlinearities by assembling interpolants in the form purely of conjunctions (or dually, disjunctions) of linear half-spaces, which addresses only a limited category of formulae featuring nonlinearities. The learning-based paradigm has also been exploited in the context of nonlinear constraint solving, see e.g., <|cite_start|> (Reference: Learning-based abstractions for nonlinear constraint solving: We propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. The proposed approach decomposes the original set of constraints into smaller subsets, and uses learning algorithms to propose sequences of abstractions that take the form of conjunctions of classifiers. The core procedure is a refinement loop that keeps improving the learned results based on counterexamples that are obtained from partial constraints that are easy to solve. Experiments show that the proposed techniques significantly improved the performance of state-of-the-art constraint solvers on many challenging benchmarks. The mechanism is capable of producing intermediate symbolic abstractions that are also important for many applications and for understanding the internal structures of hard constraint solving problems.) <|cite_end|>. In this paper, we present a classification-based learning method for the synthesis of polynomial interpolants for the quantifier-free theory of nonlinear arithmetic. Our approach is based on techniques of space transformations and kernel tricks pertinent to SVMs that have been well-developed in the realm of machine learning. Our method is described by an algorithm called NIL (and its several variants) that adopts the counterexample-guided inductive synthesis framework <|cite_start|> (Reference: Oracle-guided component-based program synthesis: We present a novel approach to automatic synthesis of loop-free programs. The approach is based on a combination of oracle-guided learning from examples, and constraint-based synthesis from components using satisfiability modulo theories (SMT) solvers. Our approach is suitable for many applications, including as an aid to program understanding tasks such as deobfuscating malware. We demonstrate the efficiency and effectiveness of our approach by synthesizing bit-manipulating programs and by deobfuscating programs.) <|cite_end|> <|cite_start|> (Reference: Programming by Sketching for Bit-streaming Programs: This paper introduces the concept of programming with sketches, an approach for the rapid development of high-performance applications. This approach allows a programmer to write clean and portable reference code, and then obtain a high-quality implementation by simply sketching the outlines of the desired implementation. Subsequently, a compiler automatically fills in the missing details while also ensuring that a completed sketch is faithful to the input reference code. In this paper, we develop StreamBit as a sketching methodology for the important class of bit-streaming programs (e.g., coding and cryptography).A sketch is a partial specification of the implementation, and as such, it affords several benefits to programmer in terms of productivity and code robustness. First, a sketch is easier to write compared to a complete implementation. Second, sketching allows the programmer to focus on exploiting algorithmic properties rather than on orchestrating low-level details.
Third, a sketch-aware compiler rejects "buggy" sketches, thus improving reliability while allowing the programmer to quickly evaluate sophisticated implementation ideas.We evaluated the productivity and performance benefits of our programming methodology in a user-study, where a group of novice StreamBit programmers competed with a group of experienced C programmers on implementing a cipher. We learned that, given the same time budget, the ciphers developed in StreamBit ran 2.5x faster than ciphers coded in C. We also produced implementations of DES and Serpent that were competitive with hand optimized implementations available in the public domain.) <|cite_end|>. We prove the soundness of NIL and propose sufficient conditions under which NIL is guaranteed to converge, that is, the derived sequence of classifiers (candidate interpolants) converges to an actual interpolant, and is complete, i.e., if an interpolant exists, the method terminates with an actual interpolant. In contrast to related work on generation of nonlinear interpolants, which restrict the input formulae, our technique provides a uniform framework, tackling the interpolation problem for the general quantifier-free theory of nonlinear arithmetic, possibly involving transcendental functions. The applicability and effectiveness of NIL are demonstrated experimentally on a collection of representative benchmarks from the literature; as is evident from experimental results, our method is able to address more demands on the nature of interpolants, including those with perturbations in parameters (due to the robustness inherited from SVMs); in many cases, it synthesizes simpler interpolants compared with other approaches, as shown by the following example. \begin{example}[ <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>]\label{exmp:tacas16} Consider two mutually contradictory inequalities $\phi \define y\ge x^2$ and $\psi \define y\le -\cos(x) + 0.8$. Our NIL algorithm constructs a single polynomial inequality $I \define 15 x^2 < 4 + 20y$ as the interpolant, namely, $\phi \models I$ and $I \wedge \psi$ is unsatisfiable; while the interpolant generated by the approach in <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>, only when provided with sufficiently large finite domains, e.g., $x\in [-\pi, \pi]$ and $y\in [-0.2, \pi^2]$, is $y>1.8\lor(0.59\leq y\leq 1.8\land -1.35\leq x\leq 1.35)\lor (0.09\leq y<0.59\land -0.77\leq x\leq0.77)\lor (y\geq 0\land -0.3\leq x\leq 0.3)$. As will be discussed later, we do not need to provide a priori information to our algorithm such as bounds on variables. \end{example} The rest of the paper is organized as follows. Sect.~\ref{sec_preliminaries} introduces some preliminaries on Craig interpolants and SVMs. In Sect.~\ref{sec_learning}, we present the NIL algorithm dedicated to synthesizing nonlinear interpolants, followed by the analysis of its soundness, conditional completeness and convergence in Sect.~\ref{sec_theories}. Sect.~\ref{sec_experiments} reports several implementation issues and experimental results on a collection of benchmarks (with the robustness discussed in Sect.~\ref{subsec_robustness}). The paper is then concluded in Sect.~\ref{sec_conclusion}. \oomit{ \paragraph*{\it Related Work.} Dai et al. 
proposed an algorithm in <|cite_start|> (Reference: Generating Non-Linear Interpolants by Semidefinite Programming: Interpolation-based techniques have been widely and successfully applied in the verification of hardware and software, e.g., in bounded-model check- ing, CEGAR, SMT, etc., whose hardest part is how to synthesize interpolants. Various work for discovering interpolants for propositional logic, quantifier-free fragments of first-order theories and their combinations have been proposed. However, little work focuses on discovering polynomial interpolants in the literature. In this paper, we provide an approach for constructing non-linear interpolants based on semidefinite programming, and show how to apply such results to the verification of programs by examples.) <|cite_end|> for generating interpolants for nonlinear polynomial inequalities based on the existence of a witness guaranteed by Stengle's Positivstellensatz <|cite_start|> (Reference: A nullstellensatz and a positivstellensatz in semialgebraic geometry: ) <|cite_end|> that can be computed using semi-definite programming (SDP). A major limitation of this method is that the two mutually contradictory formulas to be interpolated must share the same set of variables. Okudono et al. extended <|cite_start|> (Reference: Generating Non-Linear Interpolants by Semidefinite Programming: Interpolation-based techniques have been widely and successfully applied in the verification of hardware and software, e.g., in bounded-model check- ing, CEGAR, SMT, etc., whose hardest part is how to synthesize interpolants. Various work for discovering interpolants for propositional logic, quantifier-free fragments of first-order theories and their combinations have been proposed. However, little work focuses on discovering polynomial interpolants in the literature. In this paper, we provide an approach for constructing non-linear interpolants based on semidefinite programming, and show how to apply such results to the verification of programs by examples.) <|cite_end|> in <|cite_start|> (Reference: Sharper and Simpler Nonlinear Interpolants for Program Verification: Interpolation of jointly infeasible predicates plays important roles in various program verification techniques such as invariant synthesis and CEGAR. Intrigued by the recent result by Dai et al.\ that combines real algebraic geometry and SDP optimization in synthesis of polynomial interpolants, the current paper contributes its enhancement that yields sharper and simpler interpolants. The enhancement is made possible by: theoretical observations in real algebraic geometry; and our continued fraction-based algorithm that rounds off (potentially erroneous) numerical solutions of SDP solvers. Experiment results support our tool's effectiveness; we also demonstrate the benefit of sharp and simple interpolants in program verification examples.) <|cite_end|> to cater for the so-called sharper and simpler interpolants by developing a continued fraction-based algorithm that rounds off numerical solutions. In <|cite_start|> (Reference: Interpolant Synthesis for Quadratic Polynomial Inequalities and Combination with EUF: ) <|cite_end|>, Gan et al. considered the interpolation for inequalities combined with EUFs by employing the hierarchical calculus framework proposed in <|cite_start|> (Reference: Interpolation in local theory extensions: In this paper we study interpolation in local extensions of a base theory.
We identify situations in which it is possible to obtain interpolants in a hierarchical manner, by using a prover and a procedure for generating interpolants in the base theory as black-boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.) <|cite_end|>, while the inequalities are restricted to the concave quadratic form. In <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>, Gao and Zufferey transformed proof traces from $\delta$-complete decision procedures into interpolants composed of Boolean combinations of linear constraints, which can deal with certain transcendental functions beyond polynomials. The techniques of encoding interpolants as logical combinations of linear constraints, including <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|> and <|cite_start|> (Reference: Interpolants as Classifiers: ) <|cite_end|>, however, yield potentially large interpolants (even of infinite length in the worst case) and are thus difficult to use in practical applications. Our method instead constructs a single polynomial inequality as an interpolant. For example, given two mutually contradictory inequalities $\phi \define y\ge x^2$ and $\psi \define y\le -\cos(x) + 0.8$, the interpolant generated by our NIL algorithm is $15 x^2 < 4 + 20y$, while the interpolant generated by the approach in <|cite_start|> (Reference: Interpolants in Nonlinear Theories Over the Reals: ) <|cite_end|>, only when provided with sufficiently large finite domains, e.g., $x\in [-\pi, \pi]$ and $y\in [-0.2, \pi^2]$, is $y>1.8\lor(0.59\leq y\leq 1.8\land -1.35\leq x\leq 1.35)\lor (0.09\leq y<0.59\land -0.77\leq x\leq0.77)\lor (y\geq 0\land -0.3\leq x\leq 0.3)$. } <|paper_end|>
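As a minimal illustration of the classification step behind the SVM-based interpolant synthesis sketched in the paper above (a sketch, not the authors' NIL implementation): points sampled from $\phi$ and $\psi$ are separated with an SVM using a polynomial kernel, and the learned polynomial decision boundary serves as a candidate interpolant. The sampling box, kernel degree, and parameter values below are illustrative assumptions; the full algorithm additionally checks candidates for actual entailment and refines them with counterexamples, which this sketch omits.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sample(satisfies, n=200):
    # Rejection-sample points of an assumed bounding box that satisfy the predicate.
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-3.0, 3.0), rng.uniform(-1.0, 9.0)
        if satisfies(x, y):
            pts.append((x, y))
    return np.array(pts)

A = sample(lambda x, y: y >= x**2)              # samples of phi
B = sample(lambda x, y: y <= -np.cos(x) + 0.8)  # samples of psi

X = np.vstack([A, B])
labels = np.hstack([np.ones(len(A)), -np.ones(len(B))])

# A degree-2 polynomial kernel yields a quadratic decision boundary, i.e. a
# single polynomial inequality of the same shape as 15x^2 < 4 + 20y above.
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1e6).fit(X, labels)

# A usable candidate must at least classify all samples correctly.
assert (clf.predict(A) == 1).all() and (clf.predict(B) == -1).all()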
[ "<|reference_start|> Interpolants in Nonlinear Theories Over the Reals: <|reference_end|>", "<|reference_start|> Craig Interpolation in the Presence of Non-linear Constraints: <|reference_end|>", "<|reference_start|> Interpolants in Nonlinear Theories Over the Reals: <|reference_end|>", "<|reference_start|> Interpolants in Nonlinear Theories Over the Reals: <|reference_end|>" ]
[ 22, 24, 33, 40 ]
{"<|cite_1|>": "ss-1080481", "<|cite_2|>": "ss-1704768", "<|cite_3|>": "ss-973261", "<|cite_4|>": "ss-977664", "<|cite_5|>": "ss-2285150", "<|cite_6|>": "ss-1060596", "<|cite_7|>": "ss-1405752", "<|cite_8|>": "arxiv-34731", "<|multi_cite_9_1|>": "ss-2285150", "<|multi_cite_9_2|>": "ss-2011928", "<|multi_cite_9_3|>": "ss-2073804", "<|multi_cite_9_4|>": "ss-2011929", "<|multi_cite_9_5|>": "ss-2276523", "<|multi_cite_9_6|>": "ss-2011930", "<|multi_cite_9_7|>": "ss-1835296", "<|cite_10|>": "arxiv-41936", "<|cite_11|>": "ss-1383320", "<|cite_12|>": "arxiv-41936", "<|cite_13|>": "arxiv-133392", "<|cite_14|>": "ss-854911", "<|cite_15|>": "arxiv-4181", "<|cite_16|>": "arxiv-117142", "<|cite_17|>": "ss-1577163", "<|cite_18|>": "ss-1577163", "<|cite_19|>": "ss-1494995", "<|cite_20|>": "ss-1490250", "<|cite_21|>": "ss-1490250", "<|multi_cite_22_1|>": "ss-2296549", "<|multi_cite_22_2|>": "ss-1115171", "<|cite_23|>": "ss-854912", "<|multi_cite_24_1|>": "ss-1290261", "<|multi_cite_24_2|>": "ss-911062", "<|cite_25|>": "ss-1577163", "<|cite_26|>": "ss-1577163", "<|cite_27|>": "arxiv-41936", "<|cite_28|>": "ss-1383320", "<|cite_29|>": "arxiv-41936", "<|cite_30|>": "arxiv-133392", "<|cite_31|>": "ss-854911", "<|cite_32|>": "arxiv-4181", "<|cite_33|>": "ss-1577163", "<|cite_34|>": "ss-1577163", "<|cite_35|>": "ss-1490250", "<|cite_36|>": "ss-1577163"}
2112.14088
<|paper_start|> Title: Synchronized Audio-Visual Frames with Fractional Positional Encoding for Transformers in Video-to-Text Translation Abstract: Synchronized Audio-Visual Frames with Fractional Positional Encoding for Transformers in Video-to-Text Translation: Video-to-Text (VTT) is the task of automatically generating descriptions for short audio-visual video clips, which can, for instance, help visually impaired people understand the scenes of a YouTube video. Transformer architectures have shown great performance in both machine translation and image captioning, but a straightforward and reproducible application to VTT is still lacking. Moreover, there is no comprehensive study of strategies for video description generation, including how to exploit the accompanying audio with fully self-attentive networks. Thus, we explore promising approaches from image captioning and video processing and apply them to VTT by developing a straightforward Transformer architecture. Additionally, we present a novel way of synchronizing audio and video features in Transformers which we call Fractional Positional Encoding (FPE). We run multiple experiments on the VATEX dataset to determine a configuration applicable to unseen datasets that helps describe short video clips in natural language, improving the CIDEr and BLEU-4 scores by 37.13 and 12.83 points compared to a vanilla Transformer network and achieving state-of-the-art results on the MSR-VTT and MSVD datasets. Also, FPE helps increase the CIDEr score by a relative factor of 8.6%. Introduction Recurrent Neural Networks are a common architecture for modeling language generation tasks. In particular, Long Short-Term Memory (LSTM) networks in combination with Deep Convolutional Neural Networks are used to generate descriptions of images <|cite_start|> (Reference: Show and Tell: A Neural Image Caption Generator: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Deep Visual-Semantic Alignments for Generating Image Descriptions: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data.
Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.) <|cite_end|> <|cite_start|> (Reference: DenseCap: Fully Convolutional Localization Networks for Dense Captioning: We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.) <|cite_end|>. These architectures have matured over the years, introducing attention mechanisms for LSTM layers <|cite_start|> (Reference: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.) <|cite_end|>. These methods have also become increasingly popular for machine translation tasks, whose encoder-decoder architecture originally inspired the Show and Tell model of Vinyals~et~al. <|cite_start|> (Reference: Show and Tell: A Neural Image Caption Generator: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image.
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.) <|cite_end|>. Recently, Vaswani~et~al. <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> introduced a simple network architecture that is solely based on attention mechanisms and gets rid of convolutions altogether. Given the massive improvements in the task of sequence transduction and machine translation, it is natural to adapt this technique to Image Captioning <|cite_start|> (Reference: Meshed-Memory Transformer for Image Captioning: Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M$^2$ - a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions integrating learned a priori knowledge, and uses a mesh-like connectivity at decoding stage to exploit low- and high-level features. Experimentally, we investigate the performance of the M$^2$ Transformer and different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performances when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available at: https://github.com/aimagelab/meshed-memory-transformer.) <|cite_end|>. In this work, we focus on the Video-to-Text (VTT) task, which is actually quite similar to Image Captioning. 
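Before describing the model, the following sketch makes the Fractional Positional Encoding idea from the abstract concrete: the standard sinusoidal encoding is evaluated at real-valued positions derived from each stream's sampling rate, so audio and video features captured at the same instant receive the same encoding. The sampling rates and model width here are illustrative assumptions, not the configuration evaluated later.

import numpy as np

def fractional_positional_encoding(t, d_model=512):
    # Standard sinusoidal encoding, but evaluated at a real-valued position t
    # (e.g., seconds) instead of an integer token index.
    i = np.arange(d_model // 2)
    angles = t / np.power(10000.0, 2.0 * i / d_model)
    pe = np.empty(d_model)
    pe[0::2] = np.sin(angles)
    pe[1::2] = np.cos(angles)
    return pe

# Illustrative rates: video features at 25 Hz, audio features at 10 Hz,
# both mapped onto a shared time axis.
video_pe = np.stack([fractional_positional_encoding(k / 25.0) for k in range(50)])
audio_pe = np.stack([fractional_positional_encoding(k / 10.0) for k in range(20)])

# Video frame 10 and audio frame 4 both occur at t = 0.4 s and thus share
# one encoding, which is what synchronizes the two modalities.
assert np.allclose(video_pe[10], audio_pe[4])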
We develop a model that is easy to implement and yet generates high-quality captions. We start with a Transformer modified to cope with video inputs as a baseline and investigate several improvements by adopting various techniques from the domain of Image Captioning. We focus on promising extensions in order to develop a model which is easy to reproduce. Ultimately, we present a way to easily align video and audio features independently of their respective sampling rates. We align the features by extending the Positional Encoding to support fractional positions. \\ Our contributions are as follows: \begin{itemize} \item We develop a simple Transformer model for generating descriptions for short video clips. We reuse and adopt promising approaches from Image Captioning and human action classification for video clips, resulting in a single model rather than an ensemble of multiple models. \item We present a combination of learning rate schedules that increases performance and shortens convergence time for VTT. \item Finally, we introduce Fractional Positional Encoding (FPE), an extension to the traditional Positional Encoding, which makes it possible to synchronize video and audio frames according to their respective sampling rates. By using FPE, we improve our CIDEr score by 37.13 points in comparison to the baseline. Furthermore, we achieve state-of-the-art scores on the MSVD and MSR-VTT datasets. \end{itemize} Related Work Generating captions automatically from images is a task that has been widely studied. Most image captioning models are inspired by the machine translation encoder-decoder architecture and come with a vision CNN encoder and a language-generating Recurrent Neural Network (RNN) <|cite_start|> (Reference: Show and Tell: A Neural Image Caption Generator: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Deep Visual-Semantic Alignments for Generating Image Descriptions: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding.
We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.) <|cite_end|> <|cite_start|> (Reference: Long-term Recurrent Convolutional Networks for Visual Recognition and Description: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.) <|cite_end|>. Shortly after these initial works on Image Captioning, visual attention mechanisms have been shown to benefit image description generation <|cite_start|> (Reference: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.) <|cite_end|> <|cite_start|> (Reference: Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering: Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning.
In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.) <|cite_end|>. Video-to-Text (VTT) is the natural continuation of Image Captioning. Instead of generating short descriptions for still images, VTT tries to infer descriptions from short video clips. Pan et al. <|cite_start|> (Reference: Jointly Modeling Embedding and Translation to Bridge Video and Language: Automatically describing video content with natural language is a fundamental challenge of multimedia. Recurrent Neural Networks (RNN), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. Our proposed LSTM-E consists of three components: a 2-D and/or 3-D deep convolutional neural networks for learning powerful video representation, a deep RNN for generating sentences, and a joint embedding model for exploring the relationships between visual content and sentence semantics. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best reported performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also demonstrate that LSTM-E is superior in predicting Subject-Verb-Object (SVO) triplets to several state-of-the-art techniques.) <|cite_end|> use an encoder that utilizes 3D and 2D CNN features while the decoder is LSTM-based. Many other works <|cite_start|> (Reference: Video Captioning with Attention-based LSTM and Semantic Consistency: Recent progress in using long short-term memory (LSTM) for image captioning has motivated the exploration of their applications for video captioning. By taking a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention mechanism which allows for selecting salient features.
Furthermore, existing approaches usually model the translating error, but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multimodal representations (i.e., words and visual content) for generating sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time $t$ and the word-embedding feature at time $t-1$ to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate that our method using single feature can achieve competitive or even better results than the state-of-the-art baselines for video captioning in both BLEU and METEOR.) <|cite_end|> <|cite_start|> (Reference: Semantic Compositional Networks for Visual Captioning: A Semantic Compositional Network (SCN) is developed for image captioning, in which semantic concepts (i.e., tags) are detected from the image, and the probability of each tag is used to compose the parameters in a long short-term memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an ensemble of tag-dependent weight matrices. The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag. In addition to captioning images, we also extend the SCN to generate captions for video clips. We qualitatively analyze semantic composition in SCNs, and quantitatively evaluate the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text. Experimental results show that the proposed method significantly outperforms prior state-of-the-art approaches, across multiple evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: Stylenet: Generating attractive visual captions with styles: We propose a novel framework named StyleNet to address the task of generating attractive captions for images and videos with different styles. To this end, we devise a novel model component, named factored LSTM, which automatically distills the style factors in the monolingual text corpus. Then at runtime, we can explicitly control the style in the caption generation process so as to produce attractive visual captions with the desired style. Our approach achieves this goal by leveraging two sets of data: 1) factual image/video-caption paired data, and 2) stylized monolingual text data (e.g., romantic and humorous sentences). We show experimentally that StyleNet outperforms existing approaches for generating visual captions with different styles, measured in both automatic and human evaluation metrics on the newly collected FlickrStyle10K image caption dataset, which contains 10K Flickr images with corresponding humorous and romantic captions.)
<|cite_end|> make use of 2D and/or 3D features in the encoder and generate the descriptions with an LSTM decoder. Similar to Image Captioning, works in VTT have adopted traditional attention mechanisms <|cite_start|> (Reference: Video Captioning with Multi-Faceted Attention: Recently, video captioning has been attracting an increasing amount of interest, due to its potential for improving accessibility and information retrieval. While existing methods rely on different kinds of visual features and model structures, they do not fully exploit relevant semantic information. We present an extensible approach to jointly leverage several sorts of visual features and semantic attributes. Our novel architecture builds on LSTMs for sentence generation, with several attention layers and two multimodal layers. The attention mechanism learns to automatically select the most salient visual features or semantic attributes, and the multimodal layer yields overall representations for the input and outputs of the sentence generation component. Experimental results on the challenging MSVD and MSR-VTT datasets show that our framework outperforms the state-of-the-art approaches, while ground truth based semantic attributes are able to further elevate the output quality to a near-human level.) <|cite_end|> <|cite_start|> (Reference: Reconstruction Network for Video Captioning: In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture, which leverages both the forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder makes use of the forward flow to produce the sentence description based on the encoded video semantic features. Two types of reconstructors are customized to employ the backward flow and reproduce the video features based on the hidden state sequence generated by the decoder. The generation loss yielded by the encoder-decoder and the reconstruction loss introduced by the reconstructor are jointly drawn into training the proposed RecNet in an end-to-end fashion. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the encoder-decoder models and leads to significant gains in video caption accuracy.) <|cite_end|> <|cite_start|> (Reference: M3: Multimodal Memory Modelling for Video Captioning: Video captioning which automatically translates video clips into natural language sentences is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem due to the long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to long-term sequential problems [35] and working memory is the key factor of visual attention [33], we propose a Multimodal Memory Model (M3) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide visual attention on described visual targets to solve visual-textual alignments. 
Specifically, similar to [10], the proposed M3 attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the state-of-the-art methods in terms of BLEU and METEOR.) <|cite_end|> <|cite_start|> (Reference: Memory-Attended Recurrent Network for Video Captioning: Typical techniques for video captioning follow the encoder-decoder framework, which can only focus on one source video being processed. A potential disadvantage of such design is that it cannot capture the multiple visual context information of a word appearing in more than one relevant videos in training data. To tackle this limitation, we propose the Memory-Attended Recurrent Network (MARN) for video captioning, in which a memory structure is designed to explore the full-spectrum correspondence between a word and its various similar visual contexts across videos in training data. Thus, our model is able to achieve a more comprehensive understanding for each word and yield higher captioning quality. Furthermore, the built memory structure enables our method to model the compatibility between adjacent words explicitly instead of asking the model to learn implicitly, as most existing models do. Extensive validation on two real-word datasets demonstrates that our MARN consistently outperforms state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: SibNet: Sibling Convolutional Encoder for Video Captioning: Visual captioning, the task of describing an image or a video using one or few sentences, is a challenging task owing to the complexity of understanding the copious visual information and describing it using natural language. Motivated by the success of applying neural networks for machine translation, previous work applies sequence to sequence learning to translate videos into sentences. In this work, different from previous work that encodes visual information using a single flow, we introduce a novel Sibling Convolutional Encoder (SibNet) for visual captioning, which employs a dual-branch architecture to collaboratively encode videos. The first content branch encodes visual content information of the video with an autoencoder, capturing the visual appearance information of the video as other networks often do. While the second semantic branch encodes semantic information of the video via visual-semantic joint embedding, which brings complementary representation by considering the semantics when extracting features from videos. Then both branches are effectively combined with soft-attention mechanism and finally fed into a RNN decoder to generate captions. With our SibNet explicitly capturing both content and semantic information, the proposed model can better represent the rich information in videos. To validate the advantages of the proposed model, we conduct experiments on two benchmarks for video captioning, YouTube2Text and MSR-VTT. Our results demonstrate that the proposed SibNet consistently outperforms existing methods across different evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: Object-aware Aggregation with Bidirectional Temporal Graph for Video Captioning: Video captioning aims to automatically generate natural language descriptions of video content, which has drawn a lot of attention recent years. 
Generating accurate and fine-grained captions needs to not only understand the global content of video, but also capture the detailed object information. Meanwhile, video representations have great impact on the quality of generated captions. Thus, it is important for video captioning to capture salient objects with their detailed temporal dynamics, and represent them using discriminative spatio-temporal representations. In this paper, we propose a new video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG), which captures detailed temporal dynamics for salient objects in video, and learns discriminative spatio-temporal representations by performing object-aware local feature aggregation on detected object regions. The main novelties and advantages are: (1) Bidirectional temporal graph: A bidirectional temporal graph is constructed along and reversely along the temporal order, which provides complementary ways to capture the temporal trajectories for each salient object. (2) Object-aware aggregation: Learnable VLAD (Vector of Locally Aggregated Descriptors) models are constructed on object temporal trajectories and global frame sequence, which performs object-aware aggregation to learn discriminative representations. A hierarchical attention mechanism is also developed to distinguish different contributions of multiple objects. Experiments on two widely-used datasets demonstrate our OA-BTG achieves state-of-the-art performance in terms of BLEU@4, METEOR and CIDEr metrics.) <|cite_end|> <|cite_start|> (Reference: Motion Guided Spatial Attention for Video Captioning: Sequence-to-sequence models incorporated with attention mechanism have shown promising improvements on video captioning. While there is rich information both inside and between frames, spatial attention is rarely explored and motion information is usually handled by 3D-CNNs as just another modality for fusion. On the other hand, researches about human perception suggest that apparent motion can attract attention. Motivated by this, we aim to learn spatial attention on video frames under the guidance of motion information for caption generation. We present a novel video captioning framework by utilizing Motion Guided Spatial Attention (MGSA). The proposed MGSA exploits the motion between video frames by learning spatial attention from stacked optical flow images with a custom CNN. To further relate the spatial attention maps of video frames, we designed a Gated Attention Recurrent Unit (GARU) to adaptively incorporate previous attention maps. The whole framework can be trained in an end-to-end manner. We evaluate our approach on two benchmark datasets, MSVD and MSR-VTT. The experiments show that our designed model can generate better video representation and state of the art results are obtained under popular evaluation metrics such as BLEU@4, CIDEr, and METEOR.) <|cite_end|> <|cite_start|> (Reference: Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network: In this paper, we propose to guide the video caption generation with Part-of-Speech (POS) information, based on a gated fusion of multiple representations of input videos. We construct a novel gated fusion network, with one particularly designed cross-gating (CG) block, to effectively encode and fuse different types of representations, e.g., the motion and content features of an input video. 
One POS sequence generator relies on this fused representation to predict the global syntactic structure, which is thereafter leveraged to guide the video captioning generation and control the syntax of the generated sentence. Specifically, a gating strategy is proposed to dynamically and adaptively incorporate the global syntactic POS information into the decoder for generating each word. Experimental results on two benchmark datasets, namely MSR-VTT and MSVD, demonstrate that the proposed model can well exploit complementary information from multiple representations, resulting in improved performances. Moreover, the generated global POS information can well capture the global syntactic structure of the sentence, and thus be exploited to control the syntactic structure of the description. Such POS information not only boosts the video captioning performance but also improves the diversity of the generated captions. Our code is at: https://github.com/vsislab/Controllable_XGating.) <|cite_end|> <|cite_start|> (Reference: Joint syntax representation learning and visual cue translation for video captioning: Video captioning is a challenging task that involves not only visual perception but also syntax representation learning. Recent progress in video captioning has been achieved through visual perception, but syntax representation learning is still under-explored. We propose a novel video captioning approach that takes into account both visual perception and syntax representation learning to generate accurate descriptions of videos. Specifically, we use sentence templates composed of Part-of-Speech (POS) tags to represent the syntax structure of captions, and accordingly, syntax representation learning is performed by directly inferring POS tags from videos. The visual perception is implemented by a mixture model which translates visual cues into lexical words that are conditional on the learned syntactic structure of sentences. Thus, a video captioning task consists of two sub-tasks: video POS tagging and visual cue translation, which are jointly modeled and trained in an end-to-end fashion. Evaluations on three public benchmark datasets demonstrate that our proposed method achieves substantially better performance than the state-of-the-art methods, which validates the superiority of joint modeling of syntax representation learning and visual perception for video captioning.) <|cite_end|> <|cite_start|> (Reference: Object Relational Graph with Teacher-Recommended Learning for Video Captioning: Taking full advantage of the information from both vision and language is critical for the video captioning task. Existing models lack adequate visual representation due to the neglect of interaction between object, and sufficient training for content-related words due to long-tailed problems. In this paper, we propose a complete video captioning system including both a novel model and an effective training strategy. Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation. Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model. The ELM generates more semantically similar word proposals which extend the ground-truth words used for training to deal with the long-tailed problem. 
Experimental evaluations on three benchmarks: MSVD, MSR-VTT and VATEX show the proposed ORG-TRL system achieves state-of-the-art performance. Extensive ablation studies and visualizations illustrate the effectiveness of our system.) <|cite_end|> and use object-level features <|cite_start|> (Reference: Object-aware Aggregation with Bidirectional Temporal Graph for Video Captioning: Video captioning aims to automatically generate natural language descriptions of video content, which has drawn a lot of attention recent years. Generating accurate and fine-grained captions needs to not only understand the global content of video, but also capture the detailed object information. Meanwhile, video representations have great impact on the quality of generated captions. Thus, it is important for video captioning to capture salient objects with their detailed temporal dynamics, and represent them using discriminative spatio-temporal representations. In this paper, we propose a new video captioning approach based on object-aware aggregation with bidirectional temporal graph (OA-BTG), which captures detailed temporal dynamics for salient objects in video, and learns discriminative spatio-temporal representations by performing object-aware local feature aggregation on detected object regions. The main novelties and advantages are: (1) Bidirectional temporal graph: A bidirectional temporal graph is constructed along and reversely along the temporal order, which provides complementary ways to capture the temporal trajectories for each salient object. (2) Object-aware aggregation: Learnable VLAD (Vector of Locally Aggregated Descriptors) models are constructed on object temporal trajectories and global frame sequence, which performs object-aware aggregation to learn discriminative representations. A hierarchical attention mechanism is also developed to distinguish different contributions of multiple objects. Experiments on two widely-used datasets demonstrate our OA-BTG achieves state-of-the-art performance in terms of BLEU@4, METEOR and CIDEr metrics.) <|cite_end|> <|cite_start|> (Reference: Spatio-Temporal Dynamics and Semantic Attribute Enriched Visual Encoding for Video Captioning: Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recursive Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful designing of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs). Our method embeds rich temporal dynamics in visual features by hierarchically applying Short Fourier Transform to CNN features of the whole video. It additionally derives high level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish new state-of-the-art on MSVD and MSR-VTT datasets for METEOR and ROUGE_L metrics.) 
<|cite_end|> <|cite_start|> (Reference: Object Relational Graph with Teacher-Recommended Learning for Video Captioning: Taking full advantage of the information from both vision and language is critical for the video captioning task. Existing models lack adequate visual representation due to the neglect of interaction between object, and sufficient training for content-related words due to long-tailed problems. In this paper, we propose a complete video captioning system including both a novel model and an effective training strategy. Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation. Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model. The ELM generates more semantically similar word proposals which extend the ground-truth words used for training to deal with the long-tailed problem. Experimental evaluations on three benchmarks: MSVD, MSR-VTT and VATEX show the proposed ORG-TRL system achieves state-of-the-art performance. Extensive ablation studies and visualizations illustrate the effectiveness of our system.) <|cite_end|> <|cite_start|> (Reference: Normalized and Geometry-Aware Self-Attention Network for Image Captioning: Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods.) <|cite_end|> in the encoder to improve the generation of descriptions. One big leap for machine translation was the introduction of the Transformer architecture by Vaswani et al. <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. 
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>. By replacing recurrence with self-attention modules, they better utilized long-term dependencies and improved the state-of-the-art at a fraction of the training cost. Similar to the recurrent machine translation models, the Transformer architecture was quickly adopted in the task of image captioning <|cite_start|> (Reference: Entangled transformer for image captioning: In image captioning, the typical attention mechanisms are arduous to identify the equivalent visual signals especially when predicting highly abstract words. This phenomenon is known as the semantic gap between vision and language. This problem can be overcome by providing semantic attributes that are homologous to language. Thanks to the inherent recurrent nature and gated operating mechanism, Recurrent Neural Network (RNN) and its variants are the dominating architectures in image captioning. However, when designing elaborate attention mechanisms to integrate visual inputs and semantic attributes, RNN-like variants become unflexible due to their complexities. In this paper, we investigate a Transformer-based sequence modeling framework, built only with attention layers and feedforward layers. To bridge the semantic gap, we introduce EnTangled Attention (ETA) that enables the Transformer to exploit semantic and visual information simultaneously. Furthermore, Gated Bilateral Controller (GBC) is proposed to guide the interactions between the multimodal information. We name our model as ETA-Transformer. Remarkably, ETA-Transformer achieves state-of-the-art performance on the MSCOCO image captioning dataset. The ablation studies validate the improvements of our proposed modules.) <|cite_end|> <|cite_start|> (Reference: Meshed-Memory Transformer for Image Captioning: Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M$^2$ - a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions integrating learned a priori knowledge, and uses a mesh-like connectivity at decoding stage to exploit low- and high-level features. Experimentally, we investigate the performance of the M$^2$ Transformer and different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performances when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available at: https://github.com/aimagelab/meshed-memory-transformer.) 
<|cite_end|> <|cite_start|> (Reference: Image Captioning through Image Transformer: Automatic captioning of images is a task that combines the challenges of image analysis and text generation. One important aspect in captioning is the notion of attention: How to decide what to describe and in which order. Inspired by the successes in text analysis and translation, previous work have proposed the \textit{transformer} architecture for image captioning. However, the structure between the \textit{semantic units} in images (usually the detected regions from object detection model) and sentences (each single word) is different. Limited work has been done to adapt the transformer's internal architecture to images. In this work, we introduce the \textbf{\textit{image transformer}}, which consists of a modified encoding transformer and an implicit decoding transformer, motivated by the relative spatial relationship between image regions. Our design widen the original transformer layer's inner architecture to adapt to the structure of images. With only regions feature as inputs, our model achieves new state-of-the-art performance on both MSCOCO offline and online testing benchmarks.) <|cite_end|> <|cite_start|> (Reference: Multimodal Transformer with Multi-View Visual Representation for Image Captioning: Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolution neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and an recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper.) <|cite_end|>. As Transformers operate on sequences of features, it is easy to modify this architecture to describe short video clips. 
Various other video description datasets depicting everyday activities have been presented <|cite_start|> (Reference: MSR-VTT: A large video description dataset for bridging video and language: While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for "MSR-Video to Text") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Network-based approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.) <|cite_end|> <|cite_start|> (Reference: Augmenting home exercise programmes in primary care physiotherapy: a pilot randomised controlled trial of the 'Exercise Buddy' model: Background: Non-ambulatory people with MS (PwMS) comprise 25% of the MS population. Literature reviews show insufficient evidence exists regarding physiotherapy for this population. A qualitative study suggested benefits from ‘exercise buddies’ who were paid carers delivering a physiotherapy home exercise programme. Aims: To explore the feasibility and effects of ‘exercise buddies’ for non-ambulatory PwMS Methods: 29 non-ambulatory PwMS (age range 43-72) were randomised to 10 weeks of ‘usual care’ (UC) or ‘exercise buddy’ (EB). PwMS were assessed with the Multiple Sclerosis Impact Scale 29 (MSIS) and the Guys Neurological Disability Scale (GNDS) pre and post intervention. Their informal caregivers (12 male, 16 female, aged 21-68) completed the Adult Carer Quality of Life (AC-QoL) questionnaire. Findings: Using ANCOVA to adjust for pre-intervention scores there was no significant differences between groups after treatment on the MSIS-29 physical (p=0.395), MSIS-29 psychological (p=0.176) or GNDS (p=0.177). The ACQOL was also not significantly different between groups post treatment (p=0.432). Using paired t-tests the EB group improved significantly from baseline on the two components of the MSIS-29 (p=0.024, p=0.009), not seen in the UC group. Conclusions: This pilot study found no significant between group differences post treatment.
However, good feasibility and significant positive changes from baseline for the EB group warrant further exploratory work in addition to a cost analysis.) <|cite_end|> <|cite_start|> (Reference: Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training: In this work, we present Auto-captions on GIF, which is a new large-scale pre-training dataset for generic video understanding. All video-sentence pairs are created by automatically extracting and filtering video caption annotations from billions of web pages. Auto-captions on GIF dataset can be utilized to pre-train the generic feature representation or encoder-decoder structure for video captioning, and other downstream tasks (e.g., sentence localization in videos, video question answering, etc.) as well. We present a detailed analysis of Auto-captions on GIF dataset in comparison to existing video-sentence datasets. We also provide an evaluation of a Transformer-based encoder-decoder structure for vision-language pre-training, which is further adapted to video captioning downstream task and yields the compelling generalizability on MSR-VTT. The dataset is available at \url{http://www.auto-video-captions.top/2020/dataset}.) <|cite_end|> <|cite_start|> (Reference: Collecting Highly Parallel Data for Paraphrase Evaluation: A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. In addition to being simple and efficient to compute, experiments show that these metrics correlate highly with human judgments.) <|cite_end|>. In this work, we mainly focus on the VATEX Captioning dataset <|cite_start|> (Reference: Multi-modal Feature Fusion with Feature Attention for VATEX Captioning Challenge 2020: This report describes our model for VATEX Captioning Challenge 2020. First, to gather information from multiple domains, we extract motion, appearance, semantic and audio features. Then we design a feature attention module to attend on different feature when decoding. We apply two types of decoders, top-down and X-LAN and ensemble these models to get the final result. The proposed method outperforms official baseline with a significant gap. We achieve 76.0 CIDEr and 50.0 CIDEr on English and Chinese private test set. We rank 2nd on both English and Chinese private test leaderboard.) <|cite_end|>, which has also been used in the Video-to-Text (VTT) task <|cite_start|> (Reference: Vatex Video Captioning Challenge 2020: Multi-View Features and Hybrid Reward Strategies for Video Captioning: This report describes our solution for the VATEX Captioning Challenge 2020, which requires generating descriptions for the videos in both English and Chinese languages. We identified three crucial factors that improve the performance, namely: multi-view features, hybrid reward, and diverse ensemble. Based on our method of VATEX 2019 challenge, we achieved significant improvements this year with more advanced model architectures, combination of appearance and motion features, and careful hyper-parameters tuning. 
Our method achieves very competitive results on both of the Chinese and English video captioning tracks.) <|cite_end|> <|cite_start|> (Reference: Multi-modal Feature Fusion with Feature Attention for VATEX Captioning Challenge 2020: This report describes our model for VATEX Captioning Challenge 2020. First, to gather information from multiple domains, we extract motion, appearance, semantic and audio features. Then we design a feature attention module to attend on different feature when decoding. We apply two types of decoders, top-down and X-LAN and ensemble these models to get the final result. The proposed method outperforms official baseline with a significant gap. We achieve 76.0 CIDEr and 50.0 CIDEr on English and Chinese private test set. We rank 2nd on both English and Chinese private test leaderboard.) <|cite_end|> <|cite_start|> (Reference: NITS-VC System for VATEX Video Captioning Challenge 2020: Video captioning is process of summarising the content, event and action of the video into a short textual form which can be helpful in many research areas such as video guided machine translation, video sentiment analysis and providing aid to needy individual. In this paper, a system description of the framework used for VATEX-2020 video captioning challenge is presented. We employ an encoder-decoder based approach in which the visual features of the video are encoded using 3D convolutional neural network (C3D) and in the decoding phase two Long Short Term Memory (LSTM) recurrent networks are used in which visual features and input captions are fused separately and final output is generated by performing element-wise product between the output of both LSTMs. Our model is able to achieve BLEU scores of 0.20 and 0.22 on public and private test data sets respectively.) <|cite_end|> <|cite_start|> (Reference: Object Relational Graph with Teacher-Recommended Learning for Video Captioning: Taking full advantage of the information from both vision and language is critical for the video captioning task. Existing models lack adequate visual representation due to the neglect of interaction between object, and sufficient training for content-related words due to long-tailed problems. In this paper, we propose a complete video captioning system including both a novel model and an effective training strategy. Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation. Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model. The ELM generates more semantically similar word proposals which extend the ground-truth words used for training to deal with the long-tailed problem. Experimental evaluations on three benchmarks: MSVD, MSR-VTT and VATEX show the proposed ORG-TRL system achieves state-of-the-art performance. Extensive ablation studies and visualizations illustrate the effectiveness of our system.) <|cite_end|> <|cite_start|> (Reference: Open-book Video Captioning with Retrieve-Copy-Generate Network: Due to the rapid emergence of short videos and the requirement for content understanding and creation, the video captioning task has received increasing attention in recent years. 
In this paper, we convert traditional video captioning task into a new paradigm, \ie, Open-book Video Captioning, which generates natural language under the prompts of video-content-relevant sentences, not limited to the video itself. To address the open-book video captioning problem, we propose a novel Retrieve-Copy-Generate network, where a pluggable video-to-text retriever is constructed to retrieve sentences as hints from the training corpus effectively, and a copy-mechanism generator is introduced to extract expressions from multi-retrieved sentences dynamically. The two modules can be trained end-to-end or separately, which is flexible and extensible. Our framework coordinates the conventional retrieval-based methods with orthodox encoder-decoder methods, which can not only draw on the diverse expressions in the retrieved sentences but also generate natural and accurate content of the video. Extensive experiments on several benchmark datasets show that our proposed approach surpasses the state-of-the-art performance, indicating the effectiveness and promising of the proposed paradigm in the task of video captioning.) <|cite_end|> <|cite_start|> (Reference: Normalized and Geometry-Aware Self-Attention Network for Image Captioning: Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods.) <|cite_end|>. Furthermore, we validate our models on the MSR-VTT <|cite_start|> (Reference: {MSR-VTT: A large video description dataset for bridging video and language: While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for "MSRVideo to Text") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. 
This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.) <|cite_end|> and MSVD <|cite_start|> (Reference: Collecting Highly Parallel Data for Paraphrase Evaluation: A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. In addition to being simple and efficient to compute, experiments show that these metrics correlate highly with human judgments.) <|cite_end|> datasets. <|paper_end|>
[ "<|reference_start|> {Video Captioning with Attention-based LSTM and Semantic Consistency: Recent progress in using long short-term memory (LSTM) for image captioning has motivated the exploration of their applications for video captioning. By taking a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention mechanism which allows for selecting salient features. Furthermore, existing approaches usually model the translating error, but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multimodal representations (i.e., words and visual content) for generating sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math></inline-formula> and the word-embedding feature at time <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math></inline-formula><inline-formula><tex-math notation=\"LaTeX\"> $-$</tex-math></inline-formula>1 to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate that our method using single feature can achieve competitive or even better results than the state-of-the-art baselines for video captioning in both BLEU and METEOR. <|reference_end|>", "<|reference_start|> Object Relational Graph with Teacher-Recommended Learning for Video Captioning: Taking full advantage of the information from both vision and language is critical for the video captioning task. Existing models lack adequate visual representation due to the neglect of interaction between object, and sufficient training for content-related words due to long-tailed problems. In this paper, we propose a complete video captioning system including both a novel model and an effective training strategy. Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation. Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model. The ELM generates more semantically similar word proposals which extend the ground-truth words used for training to deal with the long-tailed problem. Experimental evaluations on three benchmarks: MSVD, MSR-VTT and VATEX show the proposed ORG-TRL system achieves state-of-the-art performance. Extensive ablation studies and visualizations illustrate the effectiveness of our system. 
<|reference_end|>", "<|reference_start|> {MSR-VTT: A large video description dataset for bridging video and language: While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT. <|reference_end|>", "<|reference_start|> Normalized and Geometry-Aware Self-Attention Network for Image Captioning: Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods. <|reference_end|>" ]
[ 13, 28, 35, 45 ]
{"<|multi_cite_1_1|>": "arxiv-68898", "<|multi_cite_1_2|>": "arxiv-69800", "<|multi_cite_1_3|>": "arxiv-87927", "<|cite_2|>": "arxiv-72863", "<|cite_3|>": "arxiv-68898", "<|cite_4|>": "arxiv-126595", "<|cite_5|>": "arxiv-239807", "<|multi_cite_6_1|>": "arxiv-68898", "<|multi_cite_6_2|>": "arxiv-69800", "<|multi_cite_6_3|>": "arxiv-68874", "<|multi_cite_7_1|>": "arxiv-72863", "<|multi_cite_7_2|>": "arxiv-130256", "<|cite_8|>": "arxiv-77398", "<|multi_cite_9_1|>": "ss-922818", "<|multi_cite_9_2|>": "arxiv-110907", "<|multi_cite_9_3|>": "ss-1272516", "<|multi_cite_10_1|>": "arxiv-111540", "<|multi_cite_10_2|>": "arxiv-153344", "<|multi_cite_10_3|>": "ss-680089", "<|multi_cite_10_4|>": "arxiv-203528", "<|multi_cite_10_5|>": "ss-1364734", "<|multi_cite_10_6|>": "arxiv-209080", "<|multi_cite_10_7|>": "ss-1663185", "<|multi_cite_10_8|>": "arxiv-220551", "<|multi_cite_10_9|>": "ss-786186", "<|multi_cite_10_10|>": "arxiv-250589", "<|multi_cite_11_1|>": "arxiv-209080", "<|multi_cite_11_2|>": "arxiv-193132", "<|multi_cite_11_3|>": "arxiv-250589", "<|multi_cite_11_4|>": "arxiv-254614", "<|cite_12|>": "arxiv-126595", "<|multi_cite_13_1|>": "ss-1519654", "<|multi_cite_13_2|>": "arxiv-239807", "<|multi_cite_13_3|>": "arxiv-262352", "<|multi_cite_13_4|>": "arxiv-204881", "<|multi_cite_14_1|>": "ss-785672", "<|multi_cite_14_2|>": "ss-679705", "<|multi_cite_14_3|>": "arxiv-276456", "<|multi_cite_14_4|>": "ss-683108", "<|cite_15|>": "arxiv-269666", "<|multi_cite_16_1|>": "arxiv-230566", "<|multi_cite_16_2|>": "arxiv-269666", "<|multi_cite_16_3|>": "arxiv-269997", "<|multi_cite_16_4|>": "arxiv-250589", "<|multi_cite_16_5|>": "arxiv-326278", "<|multi_cite_16_6|>": "arxiv-254614", "<|cite_17|>": "ss-785672", "<|cite_18|>": "ss-683108"}
2011.02241
<|paper_start|> Title: The Forchheim Image Database for Camera Identification in the Wild Abstract: The Forchheim Image Database for Camera Identification in the Wild: Image provenance can represent crucial knowledge in criminal investigation and journalistic fact checking. In the last two decades, numerous algorithms have been proposed for obtaining information on the source camera and distribution history of an image. For a fair ranking of these techniques, it is important to rigorously assess their performance on practically relevant test cases. To this end, a number of datasets have been proposed. However, we argue that there is a gap in existing databases: to our knowledge, there is currently no dataset that simultaneously satisfies two goals, namely a) to cleanly separate scene content and forensic traces, and b) to support realistic post-processing like social media recompression. In this work, we propose the Forchheim Image Database (FODB) to close this gap. It consists of more than 23,000 images of 143 scenes by 27 smartphone cameras, and it allows clean separation of image content from forensic artifacts. Each image is provided in 6 different qualities: the original camera-native version, and five copies from social networks. We demonstrate the usefulness of FODB in an evaluation of methods for camera identification. We report three findings. First, the recently proposed general-purpose EfficientNet remarkably outperforms several dedicated forensic CNNs both on clean and compressed images. Second, after augmentation with artificial degradations, classifiers obtain a performance boost even on unknown post-processing. Third, FODB's clean separation of scene content and forensic traces imposes important, rigorous boundary conditions for algorithm benchmarking. Introduction With the emergence of affordable smartphones, it became straightforward to record images and videos and to share them via social networks. However, this opportunity can also be abused for unlawful purposes. For instance, multimedia samples can depict illicit content like CSEM/CSAM, violate copyright, or be intentionally aimed at deceiving the viewer. In such cases, authorship and authenticity of multimedia items can be a central question for criminal prosecution. This motivated researchers to develop numerous image forensics algorithms over the last two decades. Initial methods mostly model imaging artifacts <|cite_start|> (Reference: Digital camera identification from sensor pattern noise: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.) <|cite_end|> <|cite_start|> (Reference: A survey of image forgery detection: : We are undoubtedly living in an age where we are exposed to a remarkable array of visual imagery.
While we may have historically had confidence in the integrity of this imagery, today’s digital technology has begun to erode this trust. From the tabloid magazines to the fashion industry, main-stream media outlets, scientific journals, political campaigns, courtrooms, and the photo hoaxes that land in our email in-boxes, doctored photographs are appearing with a growing frequency and sophistication. Over the past five years, the field of digital forensics has emerged to help return some trust to digital images. Here I review the state of the art in this new and exciting field. Digital watermarking has been proposed as a means by which an image can be authenticated (see, for example, [21, 5] for general surveys). The drawback of this approach is that a watermark must be inserted at the time of recording, which would limit this approach to specially equipped digital cameras. In contrast to these approaches, passive techniques for image forensics operate in the absence of any watermark or signature. These techniques work on the assumption that although digital forgeries may leave no visual clues of having been tampered with, they may alter the underlying statistics of an image. The set of image forensic tools can be roughly categorized into five categories: (1) pixel-based techniques detect statistical anomalies introduced at the pixel level; (2) format-based techniques leverage the statistical correlations introduced by a specific lossy compression scheme; (3) camera-based techniques exploit artifacts introduced by the camera lens, sensor or on-chip post-processing; (4) physically-based techniques explicitly model and detect anomalies in the three dimensional interaction between physical objects, light, and the camera; and (5) geometric-based techniques make measurements of objects in the world and their positions relative to the camera. I have selected several representative forensic tools within each of these categories to review. In so doing, I have undoubtedly omitted some worthy papers. My hope, however, is that this survey offers a representative sampling of the emerging field of image forgery detection.) <|cite_end|> <|cite_start|> (Reference: Forensic Camera Model Identification: ) <|cite_end|>. More recently, deep learning-based approaches <|cite_start|> (Reference: A Survey of Deep Learning-Based Source Image Forensics: Image source forensics is widely considered as one of the most effective ways to verify in a blind way digital image authenticity and integrity. In the last few years, many researchers have applied data-driven approaches to this task, inspired by the excellent performance obtained by those techniques on computer vision problems. In this survey, we present the most important data-driven algorithms that deal with the problem of image source forensics. To make order in this vast field, we have divided the area in five sub-topics: source camera identification, recaptured image forensic, computer graphics (CG) image forensic, GAN-generated image detection, and source social network identification. Moreover, we have included the works on anti-forensics and counter anti-forensics. For each of these tasks, we have highlighted advantages and limitations of the methods currently proposed in this promising and rich research field.) <|cite_end|> <|cite_start|> (Reference: Fighting Fake News: Image Splice Detection via Learned Self-Consistency: Advances in photo editing and manipulation tools have made it significantly easier to create fake imagery. 
Learning to detect such manipulations, however, remains a challenging problem due to the lack of sufficient amounts of manipulated training data. In this paper, we propose a learning algorithm for detecting visual image manipulations that is trained only using a large dataset of real photographs. The algorithm uses the automatically recorded photo EXIF metadata as supervisory signal for training a model to determine whether an image is self-consistent -- that is, whether its content could have been produced by a single imaging pipeline. We apply this self-consistency model to the task of detecting and localizing image splices. The proposed method obtains state-of-the-art performance on several image forensics benchmarks, despite never seeing any manipulated images at training. That said, it is merely a step in the long quest for a truly general purpose visual forensics tool.) <|cite_end|> <|cite_start|> (Reference: Learning Rich Features for Image Manipulation Detection: Image manipulation detection is different from traditional semantic object detection because it pays more attention to tampering artifacts than to image content, which suggests that richer features need to be learned. We propose a two-stream Faster R-CNN network and train it endto- end to detect the tampered regions given a manipulated image. One of the two streams is an RGB stream whose purpose is to extract features from the RGB image input to find tampering artifacts like strong contrast difference, unnatural tampered boundaries, and so on. The other is a noise stream that leverages the noise features extracted from a steganalysis rich model filter layer to discover the noise inconsistency between authentic and tampered regions. We then fuse features from the two streams through a bilinear pooling layer to further incorporate spatial co-occurrence of these two modalities. Experiments on four standard image manipulation datasets demonstrate that our two-stream framework outperforms each individual stream, and also achieves state-of-the-art performance compared to alternative methods with robustness to resizing and compression.) <|cite_end|> <|cite_start|> (Reference: Noiseprint: a CNN-based camera model fingerprint: Forensic analyses of digital images rely heavily on the traces of in-camera and out-camera processes left on the acquired images. Such traces represent a sort of camera fingerprint. If one is able to recover them, by suppressing the high-level scene content and other disturbances, a number of forensic tasks can be easily accomplished. A notable example is the PRNU pattern, which can be regarded as a device fingerprint, and has received great attention in multimedia forensics. In this paper we propose a method to extract a camera model fingerprint, called noiseprint, where the scene content is largely suppressed and model-related artifacts are enhanced. This is obtained by means of a Siamese network, which is trained with pairs of image patches coming from the same (label +1) or different (label -1) cameras. Although noiseprints can be used for a large variety of forensic tasks, here we focus on image forgery localization. Experiments on several datasets widespread in the forensic community show noiseprint-based methods to provide state-of-the-art performance.) 
<|cite_end|> <|cite_start|> (Reference: First Steps Toward Camera Model Identification with Convolutional Neural Networks: Detecting the camera model used to shoot a picture enables to solve a wide series of forensic problems, from copyright infringement to ownership attribution. For this reason, the forensic community has developed a set of camera model identification algorithms that exploit characteristic traces left on acquired images by the processing pipelines specific of each camera model. In this paper, we investigate a novel approach to solve camera model identification problem. Specifically, we propose a data-driven algorithm based on convolutional neural networks, which learns features characterizing each camera model directly from the acquired pictures. Results on a well-known dataset of 18 camera models show that: (i) the proposed method outperforms up-to-date state-of-the-art algorithms on classification of 64x64 color image patches; (ii) features learned by the proposed network generalize to camera models never used for training.) <|cite_end|> <|cite_start|> (Reference: RemNet: remnant convolutional neural network for camera model identification: ) <|cite_end|> <|cite_start|> (Reference: Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community. Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network.) <|cite_end|> <|cite_start|> (Reference: Identifying Image Provenance: An Analysis of Mobile Instant Messaging Apps: Studying the impact of sharing platforms like social networks and messaging services on multimedia content nowadays represents a due step in multimedia forensics research. In this framework, we study the characteristics of images that are uploaded and shared through three popular mobile messaging apps combined with two different sending mobile operating systems (OS). In our analysis, we consider information contained both in the image signal and in the metadata of the image file. We show that it is generally possible to identify a posteriori the last app and the OS that have been used for uploading. 
This is done by considering different scenarios involving images shared both once and twice. Moreover, we show that, by leveraging the knowledge of the last sharing app and system, it is possible to retrieve information on the previous sharing step for double shared images. In relation to prior works, a discussion on the influence of the rescaling and recompression mechanism - usually performed differently through apps and OSs - is also proposed, and the feasibility of retrieving the compression parameters of the image before being shared is assessed.) <|cite_end|> <|cite_start|> (Reference: Image Provenance Analysis at Scale: Prior art has shown it is possible to estimate, through image processing and computer vision techniques, the types and parameters of transformations that have been applied to the content of individual images to obtain new images. Given a large corpus of images and a query image, an interesting further step is to retrieve the set of original images whose content is present in the query image, as well as the detailed sequences of transformations that yield the query image given the original images. This is a problem that recently has received the name of image provenance analysis. In these times of public media manipulation ( e.g., fake news and meme sharing), obtaining the history of image transformations is relevant for fact checking and authorship verification, among many other applications. This article presents an end-to-end processing pipeline for image provenance analysis, which works at real-world scale. It employs a cutting-edge image filtering solution that is custom-tailored for the problem at hand, as well as novel techniques for obtaining the provenance graph that expresses how the images, as nodes, are ancestrally connected. A comprehensive set of experiments for each stage of the pipeline is provided, comparing the proposed solution with state-of-the-art results, employing previously published datasets. In addition, this work introduces a new dataset of real-world provenance cases from the social media site Reddit, along with baseline results.) <|cite_end|> achieve state-of-the-art results. These techniques enable a forensic analyst to detect and localize manipulations <|cite_start|> (Reference: A survey of image forgery detection: : We are undoubtedly living in an age where we are exposed to a remarkable array of visual imagery. While we may have historically had confidence in the integrity of this imagery, today’s digital technology has begun to erode this trust. From the tabloid magazines to the fashion industry, main-stream media outlets, scientific journals, political campaigns, courtrooms, and the photo hoaxes that land in our email in-boxes, doctored photographs are appearing with a growing frequency and sophistication. Over the past five years, the field of digital forensics has emerged to help return some trust to digital images. Here I review the state of the art in this new and exciting field. Digital watermarking has been proposed as a means by which an image can be authenticated (see, for example, [21, 5] for general surveys). The drawback of this approach is that a watermark must be inserted at the time of recording, which would limit this approach to specially equipped digital cameras. In contrast to these approaches, passive techniques for image forensics operate in the absence of any watermark or signature. 
These techniques work on the assumption that although digital forgeries may leave no visual clues of having been tampered with, they may alter the underlying statistics of an image. The set of image forensic tools can be roughly categorized into five categories: (1) pixel-based techniques detect statistical anomalies introduced at the pixel level; (2) format-based techniques leverage the statistical correlations introduced by a specific lossy compression scheme; (3) camera-based techniques exploit artifacts introduced by the camera lens, sensor or on-chip post-processing; (4) physically-based techniques explicitly model and detect anomalies in the three dimensional interaction between physical objects, light, and the camera; and (5) geometric-based techniques make measurements of objects in the world and their positions relative to the camera. I have selected several representative forensic tools within each of these categories to review. In so doing, I have undoubtedly omitted some worthy papers. My hope, however, is that this survey offers a representative sampling of the emerging field of image forgery detection.) <|cite_end|> <|cite_start|> (Reference: Fighting Fake News: Image Splice Detection via Learned Self-Consistency: Advances in photo editing and manipulation tools have made it significantly easier to create fake imagery. Learning to detect such manipulations, however, remains a challenging problem due to the lack of sufficient amounts of manipulated training data. In this paper, we propose a learning algorithm for detecting visual image manipulations that is trained only using a large dataset of real photographs. The algorithm uses the automatically recorded photo EXIF metadata as supervisory signal for training a model to determine whether an image is self-consistent -- that is, whether its content could have been produced by a single imaging pipeline. We apply this self-consistency model to the task of detecting and localizing image splices. The proposed method obtains state-of-the-art performance on several image forensics benchmarks, despite never seeing any manipulated images at training. That said, it is merely a step in the long quest for a truly general purpose visual forensics tool.) <|cite_end|> <|cite_start|> (Reference: Learning Rich Features for Image Manipulation Detection: Image manipulation detection is different from traditional semantic object detection because it pays more attention to tampering artifacts than to image content, which suggests that richer features need to be learned. We propose a two-stream Faster R-CNN network and train it endto- end to detect the tampered regions given a manipulated image. One of the two streams is an RGB stream whose purpose is to extract features from the RGB image input to find tampering artifacts like strong contrast difference, unnatural tampered boundaries, and so on. The other is a noise stream that leverages the noise features extracted from a steganalysis rich model filter layer to discover the noise inconsistency between authentic and tampered regions. We then fuse features from the two streams through a bilinear pooling layer to further incorporate spatial co-occurrence of these two modalities. Experiments on four standard image manipulation datasets demonstrate that our two-stream framework outperforms each individual stream, and also achieves state-of-the-art performance compared to alternative methods with robustness to resizing and compression.) 
<|cite_end|> <|cite_start|> (Reference: Noiseprint: a CNN-based camera model fingerprint: Forensic analyses of digital images rely heavily on the traces of in-camera and out-camera processes left on the acquired images. Such traces represent a sort of camera fingerprint. If one is able to recover them, by suppressing the high-level scene content and other disturbances, a number of forensic tasks can be easily accomplished. A notable example is the PRNU pattern, which can be regarded as a device fingerprint, and has received great attention in multimedia forensics. In this paper we propose a method to extract a camera model fingerprint, called noiseprint, where the scene content is largely suppressed and model-related artifacts are enhanced. This is obtained by means of a Siamese network, which is trained with pairs of image patches coming from the same (label +1) or different (label -1) cameras. Although noiseprints can be used for a large variety of forensic tasks, here we focus on image forgery localization. Experiments on several datasets widespread in the forensic community show noiseprint-based methods to provide state-of-the-art performance.) <|cite_end|>, and to identify the source device <|cite_start|> (Reference: Forensic Camera Model Identification: ) <|cite_end|> <|cite_start|> (Reference: A Survey of Deep Learning-Based Source Image Forensics: Image source forensics is widely considered as one of the most effective ways to verify in a blind way digital image authenticity and integrity. In the last few years, many researchers have applied data-driven approaches to this task, inspired by the excellent performance obtained by those techniques on computer vision problems. In this survey, we present the most important data-driven algorithms that deal with the problem of image source forensics. To make order in this vast field, we have divided the area in five sub-topics: source camera identification, recaptured image forensic, computer graphics (CG) image forensic, GAN-generated image detection, and source social network identification. Moreover, we have included the works on anti-forensics and counter anti-forensics. For each of these tasks, we have highlighted advantages and limitations of the methods currently proposed in this promising and rich research field.) <|cite_end|> <|cite_start|> (Reference: Digital camera identification from sensor pattern noise: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.) <|cite_end|> <|cite_start|> (Reference: First Steps Toward Camera Model Identification with Convolutional Neural Networks: Detecting the camera model used to shoot a picture enables to solve a wide series of forensic problems, from copyright infringement to ownership attribution. 
For this reason, the forensic community has developed a set of camera model identification algorithms that exploit characteristic traces left on acquired images by the processing pipelines specific of each camera model. In this paper, we investigate a novel approach to solve camera model identification problem. Specifically, we propose a data-driven algorithm based on convolutional neural networks, which learns features characterizing each camera model directly from the acquired pictures. Results on a well-known dataset of 18 camera models show that: (i) the proposed method outperforms up-to-date state-of-the-art algorithms on classification of 64x64 color image patches; (ii) features learned by the proposed network generalize to camera models never used for training.) <|cite_end|> <|cite_start|> (Reference: RemNet: remnant convolutional neural network for camera model identification: ) <|cite_end|> or distribution history of images or videos <|cite_start|> (Reference: Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community. Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network.) <|cite_end|> <|cite_start|> (Reference: Identifying Image Provenance: An Analysis of Mobile Instant Messaging Apps: Studying the impact of sharing platforms like social networks and messaging services on multimedia content nowadays represents a due step in multimedia forensics research. In this framework, we study the characteristics of images that are uploaded and shared through three popular mobile messaging apps combined with two different sending mobile operating systems (OS). In our analysis, we consider information contained both in the image signal and in the metadata of the image file. We show that it is generally possible to identify a posteriori the last app and the OS that have been used for uploading. This is done by considering different scenarios involving images shared both once and twice. Moreover, we show that, by leveraging the knowledge of the last sharing app and system, it is possible to retrieve information on the previous sharing step for double shared images. 
In relation to prior works, a discussion on the influence of the rescaling and recompression mechanism - usually performed differently through apps and OSs - is also proposed, and the feasibility of retrieving the compression parameters of the image before being shared is assessed.) <|cite_end|> <|cite_start|> (Reference: Image Provenance Analysis at Scale: Prior art has shown it is possible to estimate, through image processing and computer vision techniques, the types and parameters of transformations that have been applied to the content of individual images to obtain new images. Given a large corpus of images and a query image, an interesting further step is to retrieve the set of original images whose content is present in the query image, as well as the detailed sequences of transformations that yield the query image given the original images. This is a problem that recently has received the name of image provenance analysis. In these times of public media manipulation ( e.g., fake news and meme sharing), obtaining the history of image transformations is relevant for fact checking and authorship verification, among many other applications. This article presents an end-to-end processing pipeline for image provenance analysis, which works at real-world scale. It employs a cutting-edge image filtering solution that is custom-tailored for the problem at hand, as well as novel techniques for obtaining the provenance graph that expresses how the images, as nodes, are ancestrally connected. A comprehensive set of experiments for each stage of the pipeline is provided, comparing the proposed solution with state-of-the-art results, employing previously published datasets. In addition, this work introduces a new dataset of real-world provenance cases from the social media site Reddit, along with baseline results.) <|cite_end|>. In this work, we limit our focus to the latter two tasks on images.
\begin{figure}[!t]
\centering
\vspace{3pt}
\includegraphics[width=0.325\linewidth]{figures/D08_img_facebook_0062.jpg}
\includegraphics[width=0.325\linewidth]{figures/D15_img_facebook_0062.jpg}
\includegraphics[width=0.325\linewidth]{figures/D09_img_facebook_0062.jpg}\\
\vspace{3pt}
\includegraphics[width=0.325\linewidth]{figures/D22_img_facebook_0129.jpg}
\includegraphics[width=0.325\linewidth]{figures/D08_img_facebook_0129.jpg}
\includegraphics[width=0.325\linewidth]{figures/D21_img_facebook_0129.jpg}\\
\vspace{3pt}
\includegraphics[width=0.325\linewidth]{figures/D07_img_facebook_0058.jpg}
\includegraphics[width=0.325\linewidth]{figures/D08_img_facebook_0058.jpg}
\includegraphics[width=0.325\linewidth]{figures/D01_img_facebook_0058.jpg}\\
\vspace{3pt}
\includegraphics[width=0.325\linewidth]{figures/D02_img_facebook_0054.jpg}
\includegraphics[width=0.325\linewidth]{figures/D06_img_facebook_0054.jpg}
\includegraphics[width=0.325\linewidth]{figures/D01_img_facebook_0054.jpg}\\
\caption{Example images from the Forchheim Image Database}
\label{fig:example_images}
\end{figure}
The assessment of the real-world applicability of algorithms requires consistent evaluation protocols with standard benchmark datasets. In 2010, Gloe and B\"ohme proposed the Dresden Image Database (DIDB) <|cite_start|> (Reference: The dresden image database for benchmarking digital image forensics: This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques.
More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.) <|cite_end|>, the first large-scale benchmark for camera identification algorithms. It consists of nearly 17,000 images of 73 devices depicting 83 scenes. All devices record the same scenes. This is particularly important for aligning training/test splits with the scene content. Doing so prevents the danger of opening a side channel through scene content, which may lead to overly optimistic results <|cite_start|> (Reference: Forensic Camera Model Identification: ) <|cite_end|> <|cite_start|> (Reference: First Steps Toward Camera Model Identification with Convolutional Neural Networks: Detecting the camera model used to shoot a picture enables to solve a wide series of forensic problems, from copyright infringement to ownership attribution. For this reason, the forensic community has developed a set of camera model identification algorithms that exploit characteristic traces left on acquired images by the processing pipelines specific of each camera model. In this paper, we investigate a novel approach to solve camera model identification problem. Specifically, we propose a data-driven algorithm based on convolutional neural networks, which learns features characterizing each camera model directly from the acquired pictures. Results on a well-known dataset of 18 camera models show that: (i) the proposed method outperforms up-to-date state-of-the-art algorithms on classification of 64x64 color image patches; (ii) features learned by the proposed network generalize to camera models never used for training.) <|cite_end|>. Over the past 10 years, the DIDB has become one of the most important benchmark datasets in the research community. However, it only contains images from DSLR and compact cameras, whereas today most images are recorded with smartphones. Moreover, post-processed versions of the images, e.g., from social network sharing, are not part of this dataset. More recently, Shullani~\textit{et al.} proposed VISION <|cite_start|> (Reference: VISION: a video and image dataset for source identification: ) <|cite_end|>, an image and video database for benchmarking forensic algorithms. It contains over 34,000 images in total, from 35 smartphone and tablet cameras. A subset of the images has been shared through Facebook and WhatsApp. This makes it possible to investigate the impact of realistic post-processing on forensic traces. A limitation of VISION is that the images show arbitrary scenes.
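For datasets whose acquisition protocol replicates the same scenes across all cameras, such a scene-disjoint splitting policy is straightforward to implement. The following is a minimal sketch (illustrative Python under an assumed record structure; it is not code from the DIDB, VISION, or FODB):
\begin{verbatim}
# Illustrative sketch of a scene-disjoint train/test split. The record
# structure (dicts with 'path', 'camera_id', 'scene_id') is an assumption,
# not the file format of any benchmark discussed here.
import random

def scene_disjoint_split(records, test_fraction=0.25, seed=0):
    scenes = sorted({r["scene_id"] for r in records})
    random.Random(seed).shuffle(scenes)
    test_scenes = set(scenes[:int(len(scenes) * test_fraction)])
    train = [r for r in records if r["scene_id"] not in test_scenes]
    test = [r for r in records if r["scene_id"] in test_scenes]
    return train, test
\end{verbatim}
Holding out entire scenes, rather than sampling images at random, prevents a classifier from learning to recognize scene content instead of camera traces.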
For VISION, such a scene-disjoint training and evaluation split is therefore not possible. Moreover, the scene content of images from the same camera is in some cases highly correlated. This may not be an issue for methods that strictly operate on noise residuals (e.g., PRNU-based fingerprinting <|cite_start|> (Reference: Digital camera identification from sensor pattern noise: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.) <|cite_end|>). However, mixed scene content can open a side channel for end-to-end Convolutional Neural Networks (CNNs), which potentially leads to overly optimistic evaluation results. In this paper, we propose the Forchheim Image Database (FODB) as a new benchmark that combines the advantages of DIDB and VISION. It consists of 143 scenes, each captured with 27 smartphone cameras. Each image has been shared through five social media apps: Facebook, Instagram, Telegram, Twitter, and WhatsApp. This yields a total of over 23,000 JPEG images. Examples from the database are shown in Fig.~\ref{fig:example_images}. FODB allows training/test splits without scene overlap and simultaneously supports robustness evaluations under real-world post-processing. Hence, it enables rigorous benchmarking of camera association under realistic post-processing. To demonstrate the use of the dataset, we perform a benchmark of CNN-based camera identification, which brings insights into relative CNN performances, generalization to unseen post-processing, and performance impacts of scene splitting. In summary, our main contributions are: \begin{itemize} \item We propose FODB, a new large-scale database for evaluating image forensics algorithms in the wild, which is available at \texttt{\url{https://faui1-files.cs.fau.de/public/mmsec/datasets/fodb/}}. \item We employ EfficientNet <|cite_start|> (Reference: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets.
In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.) <|cite_end|> for camera identification on FODB and show that it signficantly outperforms targeted forensic CNNs across almost all qualities. \item We show that degradation during training sigificantly boosts robustness even for unseen post-processing. \item We demonstrate the importance of scene splitting for learning-based camera identification \end{itemize} The remainder of the paper is organized as follows: We review image provenance benchmarks in Sec.~\ref{sec:related_work}. The proposed database FODB is described in Sec.~\ref{sec:database}. In Sec.~\ref{sec:method}, we describe our evaluation protocol for camera identification. The results of this evaluation are presented in Sec.~\ref{sec:results}. Section~\ref{sec:conclusion} concludes the paper. Related Work \label{sec:related_work} In a number of existing datasets, different cameras replicate the same set of scenes. This allows to split the images into training and evaluation subsets such that scenes are disjoint. The first large-scale forensic benchmark to support such a splitting policy is the Dresden Image Database <|cite_start|> (Reference: The dresden image database for benchmarking digital image forensics: This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.) <|cite_end|>, as stated in the previous section. Cheng~\textit{et al.} propose the NUS dataset <|cite_start|> (Reference: Illuminant Estimation for Color Constancy: Why Spatial-Domain Methods Work and the Role of the Color Distribution: Color constancy is a well-studied topic in color vision. Methods are generally categorized as (1) low-level statistical methods, (2) gamut-based methods, and (3) learning-based methods. 
In this work, we distinguish methods depending on whether they work directly from color values (i.e., color domain) or from values obtained from the image's spatial information (e.g., image gradients/frequencies). We show that spatial information does not provide any additional information that cannot be obtained directly from the color distribution and that the indirect aim of spatial-domain methods is to obtain large color differences for estimating the illumination direction. This finding allows us to develop a simple and efficient illumination estimation method that chooses bright and dark pixels using a projection distance in the color distribution and then applies principal component analysis to estimate the illumination direction. Our method gives state-of-the-art results on existing public color constancy datasets as well as on our newly collected dataset (NUS dataset) containing 1736 images from eight different high-end consumer cameras.) <|cite_end|>, with 1,736 images of over 200 scenes, each recorded with 8 DSLR cameras. In another work <|cite_start|> (Reference: Beyond white: Ground truth colors for color constancy correction: A limitation in color constancy research is the inability to establish ground truth colors for evaluating corrected images. Many existing datasets contain images of scenes with a color chart included, however, only the chart's neutral colors (grayscale patches) are used to provide the ground truth for illumination estimation and correction. This is because the corrected neutral colors are known to lie along the achromatic line in the camera's color space (i.e. R=G=B), the correct RGB values of the other color patches are not known. As a result, most methods estimate a 3*3 diagonal matrix that ensures only the neutral colors are correct. In this paper, we describe how to overcome this limitation. Specifically, we show that under certain illuminations, a diagonal 3*3 matrix is capable of correcting not only neutral colors, but all the colors in a scene. This finding allows us to find the ground truth RGB values for the color chart in the camera's color space. We show how to use this information to correct all the images in existing datasets to have correct colors. Working from these new color corrected datasets, we describe how to modify existing color constancy algorithms to perform better image correction.) <|cite_end|>, Cheng~\textit{et al.} recorded an additional 944 indoor images. Also in this dataset, each scene is captured with each camera. Although the NUS dataset is presented as an illuminant estimation benchmark, it can directly be used for camera identification, and the acquisition protocols allow scene splitting similar to that of DIDB. Abdelhamed~\textit{et al.} propose the Smartphone Image Denoising Dataset (SIDD) <|cite_start|> (Reference: A high-quality denoising dataset for smartphone cameras: The last decade has seen an astronomical shift from imaging with DSLR and point-and-shoot cameras to imaging with smartphone cameras. Due to the small aperture and sensor size, smartphone images have notably more noise than their DSLR counterparts. While denoising for smartphone images is an active research area, the research community currently lacks a denoising image dataset representative of real noisy images from smartphone cameras with high-quality ground truth. We address this issue in this paper with the following contributions.
We propose a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras. Using this procedure, we have captured a dataset - the Smartphone Image Denoising Dataset (SIDD) - of ~30,000 noisy images from 10 scenes under different lighting conditions using five representative smartphone cameras and generated their ground truth images. We used this dataset to benchmark a number of denoising algorithms. We show that CNN-based methods perform better when trained on our high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data.) <|cite_end|> of about 30,000 images. It consists of 10 indoor scenes under different settings captured with 5 smartphone cameras. The dataset targets image denoising, but can also be used for benchmarking camera identification algorithms with proper scene splitting. Nowadays, images are often distributed via social networks and thereby undergo compression to save memory and bandwidth. Therefore, it is important to assess the performance of forensic algorithms in the presence of such post-processing. Unfortunately, social network sharing has not been relevant during the conception of these three datasets. Hence, none of these three datasets comes with images that have already been passed through social networks. While a user of the dataset could in principle pass the images through social networks by herself (given permission by the creators of the datasets), it would still be a remarkably tedious procedure. For example, we estimate that it would require at least a month of work to upload and download the 17,000 images of DIDB through various social networks due to limitations on automated image uploading and downloading on most of their smartphone apps. In 2018, the IEEE Signal Processing Society hosted a challenge for camera model identification, which amongst other aspects addressed algorithm performance under general post-processing. The training dataset consists of 2,750 images of arbitrary scenes from 10 cameras. The test dataset contains original images, as well as images that are recompressed with random JPEG quality, rescaling, or gamma correction. In the VISION database by Shullani~\textit{et al.}, around 7,500 images from 35 smartphone cameras have been shared via Facebook in two qualities, and via Whatsapp <|cite_start|> (Reference: VISION: a video and image dataset for source identification: ) <|cite_end|>. It consists of about 30,000 images in 4 quality levels that enable evaluations of the impact of post-processing. Guidice~\textit{et al.} propose a method for detecting the social network and software used to share an image <|cite_start|> (Reference: A Classification Engine for Image Ballistics of Social Data: Image Forensics has already achieved great results for the source camera identification task on images. Standard approaches for data coming from Social Network Platforms cannot be applied due to different processes involved (e.g., scaling, compression, etc.). Over 1 billion images are shared each day on the Internet and obtaining information about their history from the moment they were acquired could be exploited for investigation purposes. In this paper, a classification engine for the reconstruction of the history of an image, is presented.
Specifically, exploiting K-NN and decision trees classifiers and a-priori knowledge acquired through image analysis, we propose an automatic approach that can understand which Social Network Platform has processed an image and the software application used to perform the image upload. The engine makes use of proper alterations introduced by each platform as features. Results, in terms of global accuracy on a dataset of 2720 images, confirm the effectiveness of the proposed strategy.) <|cite_end|>. To this end, they recorded images with 8 cameras of various types including 4 smartphones. Subsequently, they shared them via 10 social networks and two operating systems (OS) to obtain 2,720 images. Caldelli~\textit{et al.} also investigate social network provenance <|cite_start|> (Reference: Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community. Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network.) <|cite_end|>. They used 1,000 TIFF images from UCID <|cite_start|> (Reference: UCID: An Uncompressed Color Image Database: Standardised image databases or rather the lack of them are one of the main weaknesses in the field of content based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets. Naturally this makes comparison of results somewhat difficult. While a first approach towards a common colour image set has been taken by the MPEG 7 committee their database does not cater for all strands of research in the CBIR community. In particular as the MPEG-7 images only exist in compressed form it does not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain or to judge the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it") - an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images with corresponding models that an ideal CBIR algorithm would retrieve. 
While its initial intention was to provide a dataset for the evaluation of compressed domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.) <|cite_end|>, an earlier image retrieval database. These images are compressed with different JPEG qualities and shared on 3 social networks, which results in 30,000 images. However, all images in UCID stem from a single camera, which does not allow for camera identification. Phan~\textit{et al.} investigate traces of instant messaging apps and the host OS. They used 350 images from 35 devices from the VISION dataset and shared them either once or twice with three messengers and two OSs <|cite_start|> (Reference: Identifying Image Provenance: An Analysis of Mobile Instant Messaging Apps: Studying the impact of sharing platforms like social networks and messaging services on multimedia content nowadays represents a due step in multimedia forensics research. In this framework, we study the characteristics of images that are uploaded and shared through three popular mobile messaging apps combined with two different sending mobile operating systems (OS). In our analysis, we consider information contained both in the image signal and in the metadata of the image file. We show that it is generally possible to identify a posteriori the last app and the OS that have been used for uploading. This is done by considering different scenarios involving images shared both once and twice. Moreover, we show that, by leveraging the knowledge of the last sharing app and system, it is possible to retrieve information on the previous sharing step for double shared images. In relation to prior works, a discussion on the influence of the rescaling and recompression mechanism - usually performed differently through apps and OSs - is also proposed, and the feasibility of retrieving the compression parameters of the image before being shared is assessed.) <|cite_end|>. This leads to a total of 350 original images, 2,100 single-shared images and 6,300 double-shared images. In a subsequent work, Phan~\textit{et al.} consider up to three-fold sharing on social media platforms <|cite_start|> (Reference: Tracking Multiple Image Sharing on Social Networks: Social Networks (SN) and Instant Messaging Apps (IMA) are more and more engaging people in their personal relations taking possession of an important part of their daily life. Huge amounts of multimedia contents, mainly photos, are poured and successively shared on these networks so quickly that is not possible to follow their paths. This last issue surely grants anonymity and impunity thus it consequently makes easier to commit crimes such as reputation attack and cyberbullying. In fact, contents published within a restricted group of friends on an IMA can be rapidly delivered and viewed on a SN by acquaintances and then by strangers without any sort of tracking. In a forensic scenario (e.g., during an investigation), succeeding in understanding this flow could be strategic, thus allowing to reveal all the intermediate steps a certain content has followed. This work aims at tracking multiple sharing on social networks, by extracting specific traces left by each SN within the image file, due to the process each of them applies, to perform a multi-class classification.
Innovative strategies, based on deep learning, are proposed and satisfactory results are achieved in recovering till triple up-downloads.) <|cite_end|>. They build two datasets. The first dataset is based on the raw image database RAISE <|cite_start|> (Reference: RAISE: a raw images dataset for digital image forensics: Digital forensics is a relatively new research area which aims at authenticating digital media by detecting possible digital forgeries. Indeed, the ever increasing availability of multimedia data on the web, coupled with the great advances reached by computer graphical tools, makes the modification of an image and the creation of visually compelling forgeries an easy task for any user. This in turns creates the need of reliable tools to validate the trustworthiness of the represented information. In such a context, we present here RAISE, a large dataset of 8156 high-resolution raw images, depicting various subjects and scenarios, properly annotated and available together with accompanying metadata. Such a wide collection of untouched and diverse data is intended to become a powerful resource for, but not limited to, forensic researchers by providing a common benchmark for a fair comparison, testing and evaluation of existing and next generation forensic algorithms. In this paper we describe how RAISE has been collected and organized, discuss how digital image forensics and many other multimedia research areas may benefit of this new publicly available benchmark dataset and test a very recent forensic technique for JPEG compression detection.) <|cite_end|>. The images are compressed in JPEG format and shared up to three times on three social networks, which yields a total of 35,100 images. The second dataset is based on VISION. Here, 510 images are shared up to three times, to obtain about 20,000 additional images. The above-stated datasets <|cite_start|> (Reference: VISION: a video and image dataset for source identification: ) <|cite_end|> <|cite_start|> (Reference: A Classification Engine for Image Ballistics of Social Data: Image Forensics has already achieved great results for the source camera identification task on images. Standard approaches for data coming from Social Network Platforms cannot be applied due to different processes involved (e.g., scaling, compression, etc.). Over 1 billion images are shared each day on the Internet and obtaining information about their history from the moment they were acquired could be exploited for investigation purposes. In this paper, a classification engine for the reconstruction of the history of an image, is presented. Specifically, exploiting K-NN and decision trees classifiers and a-priori knowledge acquired through image analysis, we propose an automatic approach that can understand which Social Network Platform has processed an image and the software application used to perform the image upload. The engine makes use of proper alterations introduced by each platform as features. Results, in terms of global accuracy on a dataset of 2720 images, confirm the effectiveness of the proposed strategy.) <|cite_end|> <|cite_start|> (Reference: Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community.
Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network.) <|cite_end|> <|cite_start|> (Reference: Identifying Image Provenance: An Analysis of Mobile Instant Messaging Apps: Studying the impact of sharing platforms like social networks and messaging services on multimedia content nowadays represents a due step in multimedia forensics research. In this framework, we study the characteristics of images that are uploaded and shared through three popular mobile messaging apps combined with two different sending mobile operating systems (OS). In our analysis, we consider information contained both in the image signal and in the metadata of the image file. We show that it is generally possible to identify a posteriori the last app and the OS that have been used for uploading. This is done by considering different scenarios involving images shared both once and twice. Moreover, we show that, by leveraging the knowledge of the last sharing app and system, it is possible to retrieve information on the previous sharing step for double shared images. In relation to prior works, a discussion on the influence of the rescaling and recompression mechanism - usually performed differently through apps and OSs - is also proposed, and the feasibility of retrieving the compression parameters of the image before being shared is assessed.) <|cite_end|> <|cite_start|> (Reference: Tracking Multiple Image Sharing on Social Networks: Social Networks (SN) and Instant Messaging Apps (IMA) are more and more engaging people in their personal relations taking possession of an important part of their daily life. Huge amounts of multimedia contents, mainly photos, are poured and successively shared on these networks so quickly that is not possible to follow their paths. This last issue surely grants anonymity and impunity thus it consequently makes easier to commit crimes such as reputation attack and cyberbullying. In fact, contents published within a restricted group of friends on an IMA can be rapidly delivered and viewed on a SN by acquaintances and then by strangers without any sort of tracking. 
In a forensic scenario (e.g., during an investigation), succeeding in understanding this flow could be strategic, thus allowing to reveal all the intermediate steps a certain content has followed. This work aims at tracking multiple sharing on social networks, by extracting specific traces left by each SN within the image file, due to the process each of them applies, to perform a multi-class classification. Innovative strategies, based on deep learning, are proposed and satisfactory results are achieved in recovering till triple up-downloads.) <|cite_end|> allow benchmarking social network provenance algorithms. With the exception of the dataset by Caldelli~\emph{et al.}, which consists of images from only one source camera <|cite_start|> (Reference: Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community. Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network.) <|cite_end|>, they are also suitable for evaluating camera identification algorithms and their robustness for simulated and real-world <|cite_start|> (Reference: VISION: a video and image dataset for source identification: ) <|cite_end|> <|cite_start|> (Reference: A Classification Engine for Image Ballistics of Social Data: Image Forensics has already achieved great results for the source camera identification task on images. Standard approaches for data coming from Social Network Platforms cannot be applied due to different processes involved (e.g., scaling, compression, etc.). Over 1 billion images are shared each day on the Internet and obtaining information about their history from the moment they were acquired could be exploited for investigation purposes. In this paper, a classification engine for the reconstruction of the history of an image, is presented.
Results, in terms of global accuracy on a dataset of 2720 images, confirm the effectiveness of the proposed strategy.) <|cite_end|> <|cite_start|> (Reference: Identifying Image Provenance: An Analysis of Mobile Instant Messaging Apps: Studying the impact of sharing platforms like social networks and messaging services on multimedia content nowadays represents a due step in multimedia forensics research. In this framework, we study the characteristics of images that are uploaded and shared through three popular mobile messaging apps combined with two different sending mobile operating systems (OS). In our analysis, we consider information contained both in the image signal and in the metadata of the image file. We show that it is generally possible to identify a posteriori the last app and the OS that have been used for uploading. This is done by considering different scenarios involving images shared both once and twice. Moreover, we show that, by leveraging the knowledge of the last sharing app and system, it is possible to retrieve information on the previous sharing step for double shared images. In relation to prior works, a discussion on the influence of the rescaling and recompression mechanism - usually performed differently through apps and OSs - is also proposed, and the feasibility of retrieving the compression parameters of the image before being shared is assessed.) <|cite_end|> <|cite_start|> (Reference: Tracking Multiple Image Sharing on Social Networks: Social Networks (SN) and Instant Messaging Apps (IMA) are more and more engaging people in their personal relations taking possession of an important part of their daily life. Huge amounts of multimedia contents, mainly photos, are poured and successively shared on these networks so quickly that is not possible to follow their paths. This last issue surely grants anonymity and impunity thus it consequently makes easier to commit crimes such as reputation attack and cyberbullying. In fact, contents published within a restricted group of friends on an IMA can be rapidly delivered and viewed on a SN by acquaintances and then by strangers without any sort of tracking. In a forensic scenario (e.g., during an investigation), succeeding in understanding this flow could be strategic, thus allowing to reveal all the intermediate steps a certain content has followed. This work aims at tracking multiple sharing on social networks, by extracting specific traces left by each SN within the image file, due to the process each of them applies, to perform a multi-class classification. Innovative strategies, based on deep learning, are proposed and satisfactory results are achieved in recovering till triple up-downloads.) <|cite_end|> post-processing. Two further large-scale camera identification benchmarks are SOCRatES <|cite_start|> (Reference: SOCRatES: A Database of Realistic Data for SOurce Camera REcognition on Smartphones: SOCRatES: SOurce Camera REcognition on Smartphones, is an image and video database especially designed for source digital camera recognition on smartphones. It answers to two specific needs, the need of wider pools of data for the developing and benchmarking of image forensic techniques, and the need to move the application of these techniques on smartphones, since, nowadays, they are the most employed devices for image capturing and video recording. 
What makes SOCRatES different from all previous published databases is that it is collected by the smartphone owners themselves, introducing a great heterogeneity and realness in the data. SOCRatES is currently made up of about 9.700 images and 1000 videos captured with 103 different smartphones of 15 different makes and about 60 different models. With 103 different devices, SOCRatES is the database for source digital camera identification that includes the highest number of different sensors. In this paper we describe SOCRatES and we present a baseline assessment based on the Sensor Pattern Noise) <|cite_end|> and the Daxing Smartphone Identification Dataset (DSID) <|cite_start|> (Reference: Daxing Smartphone Identification Dataset: Over the past few years, the imaging device has changed from digital cameras to smartphone cameras. With the popularity of mobile Internet applications, there explode massive digital images and videos captured by such smartphones, which are nearly held one per person. Consequently, the capturing source of images/videos delivers valuable identity information for criminal investigations and critical forensic evidence. It is significant to address the source identification of smartphone images/videos. In this paper, we build a Daxing smartphone identification dataset, which collects images and videos from extensive smartphones of different brands, models and devices. Specifically, the dataset includes 43 400 images and 1,400 videos captured by 90 smartphones of 22 models belonging to 5 brands. For example, there are 23 smartphone devices for the iPhone 6S (Plus) model. To the best of our knowledge, Daxing dataset uses the largest amount of smartphones for image/video source identification compared with other related datasets, as well as the highest numbers of devices per model and captured images/videos. The dataset has been released as a free and open-source for scientific researchers and criminal investigators.) <|cite_end|>. SOCRatES contains 9,700 images from 103 smartphones of 60 models, and thus is currently the database with the largest number of devices. DSID consists of 43,400 images from 90 devices of 22 models, which currently is to our knowledge the database with the largest number of images and devices per model. Unfortunately, none of these benchmark datasets supports scene splitting, such that it is currently not possible to investigate social media-related artifacts on split scenes. However, we argue in line with previous works <|cite_start|> (Reference: Forensic Camera Model Identification: ) <|cite_end|> <|cite_start|> (Reference: First Steps Toward Camera Model Identification with Convolutional Neural Networks: Detecting the camera model used to shoot a picture enables to solve a wide series of forensic problems, from copyright infringement to ownership attribution. For this reason, the forensic community has developed a set of camera model identification algorithms that exploit characteristic traces left on acquired images by the processing pipelines specific of each camera model. In this paper, we investigate a novel approach to solve camera model identification problem. Specifically, we propose a data-driven algorithm based on convolutional neural networks, which learns features characterizing each camera model directly from the acquired pictures.
Results on a well-known dataset of 18 camera models show that: (i) the proposed method outperforms up-to-date state-of-the-art algorithms on classification of 64x64 color image patches; (ii) features learned by the proposed network generalize to camera models never used for training.) <|cite_end|> that scene splitting is important during evaluation. It removes \emph{by design} the threat of leaking side-channel information from the scene content into the evaluation. Such leakage may lead to an overestimation of the performance, as we will show in Sec.~\ref{sec:results}. The proposed Forchheim Image Database FODB closes this gap: it jointly allows a rigorous scene splitting policy, and enables investigation of the effect of social media post-processing on forensic algorithms. <|paper_end|>
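The scene-splitting policy argued for in the paper above is straightforward to implement once every image record carries a scene identifier. A minimal sketch in Python, assuming an illustrative record schema with a 'scene_id' key (FODB's actual file layout may encode this differently):

    import random

    def scene_disjoint_split(records, test_fraction=0.2, seed=0):
        """Split image records so that no scene appears in both subsets.

        `records` is assumed to be a list of dicts with a 'scene_id'
        key; this schema is illustrative, not FODB's actual layout.
        """
        scenes = sorted({r["scene_id"] for r in records})
        rng = random.Random(seed)
        rng.shuffle(scenes)
        n_test = max(1, int(len(scenes) * test_fraction))
        test_scenes = set(scenes[:n_test])
        train = [r for r in records if r["scene_id"] not in test_scenes]
        test = [r for r in records if r["scene_id"] in test_scenes]
        # Sanity check: no scene leaks from training into evaluation.
        assert not ({r["scene_id"] for r in train} & {r["scene_id"] for r in test})
        return train, test

Splitting by scene rather than by image is exactly what closes the side channel the paper discusses: two images of the same scene land on the same side of the split by construction.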
[ "<|reference_start|> Forensic Camera Model Identification: <|reference_end|>", "<|reference_start|> Digital camera identification from sensor pattern noise: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction. <|reference_end|>", "<|reference_start|> Image Origin Classification Based on Social Network Provenance: Recognizing information about the origin of a digital image has been individuated as a crucial task to be tackled by the image forensic scientific community. Understanding something on the previous history of an image could be strategic to address any successive assessment to be made on it: knowing the kind of device used for acquisition or, better, the model of the camera could focus investigations in a specific direction. Sometimes just revealing that a determined post-processing, such as an interpolation or a filtering, has been performed on an image could be of fundamental importance to go back to its provenance. This paper locates in such a context and proposes an innovative method to inquire if an image derives from a social network and, in particular, try to distinguish from, which one has been downloaded. The technique is based on the assumption that each social network applies a peculiar and mostly unknown manipulation that, however, leaves some distinctive traces on the image; such traces can be extracted to feature every platform. By resorting at trained classifiers, the presented methodology is satisfactorily able to discern different social network origins. Experimental results carried out on diverse image datasets and in various operative conditions witness that such a distinction is possible. In addition, the proposed method is also able to go back to the original JPEG quality factor the image had before being uploaded on a social network. <|reference_end|>", "<|reference_start|> Forensic Camera Model Identification: <|reference_end|>" ]
[ 25, 28, 43, 53 ]
{"<|multi_cite_1_1|>": "ss-1522138", "<|multi_cite_1_2|>": "ss-795041", "<|multi_cite_1_3|>": "ss-1557979", "<|multi_cite_2_1|>": "ss-1299739", "<|multi_cite_2_2|>": "arxiv-158037", "<|multi_cite_2_3|>": "arxiv-158296", "<|multi_cite_2_4|>": "arxiv-170218", "<|multi_cite_2_5|>": "arxiv-93335", "<|multi_cite_2_6|>": "ss-1299740", "<|multi_cite_2_7|>": "ss-1299741", "<|multi_cite_2_8|>": "ss-1299742", "<|multi_cite_2_9|>": "arxiv-145908", "<|multi_cite_3_1|>": "ss-795041", "<|multi_cite_3_2|>": "arxiv-158037", "<|multi_cite_3_3|>": "arxiv-158296", "<|multi_cite_3_4|>": "arxiv-170218", "<|multi_cite_4_1|>": "ss-1557979", "<|multi_cite_4_2|>": "ss-1299739", "<|multi_cite_4_3|>": "ss-1522138", "<|multi_cite_4_4|>": "arxiv-93335", "<|multi_cite_4_5|>": "ss-1299740", "<|multi_cite_5_1|>": "ss-1299741", "<|multi_cite_5_2|>": "ss-1299742", "<|multi_cite_5_3|>": "arxiv-145908", "<|cite_6|>": "ss-1052638", "<|multi_cite_7_1|>": "ss-1557979", "<|multi_cite_7_2|>": "arxiv-93335", "<|cite_8|>": "ss-1052639", "<|cite_9|>": "ss-1522138", "<|cite_10|>": "arxiv-206505", "<|cite_11|>": "ss-1052638", "<|cite_12|>": "ss-1299743", "<|cite_13|>": "ss-1298481", "<|cite_14|>": "ss-1262480", "<|cite_16|>": "ss-1052639", "<|cite_17|>": "arxiv-108262", "<|cite_18|>": "ss-1299741", "<|cite_19|>": "ss-1299744", "<|cite_20|>": "ss-1299742", "<|cite_21|>": "ss-1522134", "<|cite_22|>": "ss-1216613", "<|multi_cite_23_2|>": "ss-1052639", "<|multi_cite_23_3|>": "arxiv-108262", "<|multi_cite_23_4|>": "ss-1299741", "<|multi_cite_23_5|>": "ss-1299742", "<|multi_cite_23_6|>": "ss-1522134", "<|cite_24|>": "ss-1299741", "<|multi_cite_26_1|>": "ss-1052639", "<|multi_cite_26_2|>": "arxiv-108262", "<|multi_cite_26_3|>": "ss-1299742", "<|multi_cite_26_4|>": "ss-1522134", "<|cite_27|>": "ss-1299745", "<|cite_28|>": "ss-1299746", "<|multi_cite_29_1|>": "ss-1557979", "<|multi_cite_29_2|>": "arxiv-93335"}
2205.03409-1
. Among them, FFHQ consists of $70,000$ high-quality images whose initial size exceeds $1024 \times 1024$. Based on the FFHQ dataset, some recent works <|cite_start|> (Reference: Towards Real-World Blind Face Restoration with Generative Facial Prior: Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.) <|cite_end|> <|cite_start|> (Reference: GAN Prior Embedded Network for Blind Face Restoration in the Wild: Blind face restoration (BFR) from severely degraded face images in the wild is a very challenging problem. Due to the high illness of the problem and the complex unknown degradation, directly training a deep neural network (DNN) usually cannot lead to acceptable results. Existing generative adversarial network (GAN) based methods can produce better results but tend to generate over-smoothed restorations. In this work, we propose a new method by first learning a GAN for high-quality face image generation and embedding it into a U-shaped DNN as a prior decoder, then fine-tuning the GAN prior embedded DNN with a set of synthesized low-quality face images. The GAN blocks are designed to ensure that the latent code and noise input to the GAN can be respectively generated from the deep and shallow features of the DNN, controlling the global face structure, local face details and background of the reconstructed image. The proposed GAN prior embedded network (GPEN) is easy-to-implement, and it can generate visually photo-realistic results. Our experiments demonstrated that the proposed GPEN achieves significantly superior results to state-of-the-art BFR methods both quantitatively and qualitatively, especially for the restoration of severely degraded face images in the wild. The source code and models can be found at https://github.com/yangxy/GPEN.) <|cite_end|> <|cite_start|> (Reference: Progressive Semantic-Aware Style Transformation for Blind Face Restoration: Face restoration is important in face image processing, and has been widely studied in recent years. However, previous works often fail to generate plausible high quality (HQ) results for real-world low quality (LQ) face images. In this paper, we propose a new progressive semantic-aware style transformation framework, named PSFR-GAN, for face restoration. Specifically, instead of using an encoder-decoder framework as previous methods, we formulate the restoration of LQ face images as a multi-scale progressive restoration procedure through semantic-aware style transformation. 
Given a pair of LQ face image and its corresponding parsing map, we first generate a multi-scale pyramid of the inputs, and then progressively modulate different scale features from coarse-to-fine in a semantic-aware style transfer way. Compared with previous networks, the proposed PSFR-GAN makes full use of the semantic (parsing maps) and pixel (LQ images) space information from different scales of input pairs. In addition, we further introduce a semantic aware style loss which calculates the feature style loss for each semantic region individually to improve the details of face textures. Finally, we pretrain a face parsing network which can generate decent parsing maps from real-world LQ face images. Experiment results show that our model trained with synthetic data can not only produce more realistic high-resolution results for synthetic LQ inputs and but also generalize better to natural LQ face images compared with state-of-the-art methods. Codes are available at https://github.com/chaofengc/PSFRGAN.) <|cite_end|> have achieved superior performance and can restore faces with faithful textures. Due to the low cost of taking high-definition face pictures and abundant online resources, it is easy to construct such high-quality image datasets without complicated pre-processing. In contrast to the abundant face image datasets, the most commonly used datasets in VFSR are VoxCeleb1 <|cite_start|> (Reference: VoxCeleb: a large-scale speaker identification dataset: Most existing datasets for speaker identification contain samples obtained under quite constrained conditions, and are usually hand-annotated, hence limited in size. The goal of this paper is to generate a large scale text-independent speaker identification dataset collected 'in the wild'. We make two contributions. First, we propose a fully automated pipeline based on computer vision techniques to create the dataset from open-source media. Our pipeline involves obtaining videos from YouTube; performing active speaker verification using a two-stream synchronization Convolutional Neural Network (CNN), and confirming the identity of the speaker using CNN based facial recognition. We use this pipeline to curate VoxCeleb which contains hundreds of thousands of 'real world' utterances for over 1,000 celebrities. Our second contribution is to apply and compare various state of the art speaker identification techniques on our dataset to establish baseline performance. We show that a CNN based architecture obtains the best performance for both identification and verification.) <|cite_end|> and VoxCeleb2 <|cite_start|> (Reference: VoxCeleb2: Deep Speaker Recognition: The objective of this paper is speaker recognition under noisy and unconstrained conditions. We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2 which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset. Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin.) <|cite_end|>.
Although these two datasets contain numerous utterances of celebrities, the resolution and quality of most videos are so poor that the models trained on these datasets do not have adequate ability to restore high-quality frames as SFSR methods do. In order to fill the gap between image face datasets and video face datasets, we propose a pipeline to extract high-quality face clips from web videos, and construct a high-quality video face dataset (VFHQ), which could promote the development of the VFSR field. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{figs/dataset_comparison.pdf} \caption{Visual comparisons between the two datasets: VoxCeleb1 (\textbf{top}) and VFHQ (\textbf{bottom}). Images are randomly selected from the dataset. VFHQ images have much higher quality. \textbf{Zoom in for best view}} \label{fig:dataset_comparison} \vspace{-0.5cm} \end{figure} <|paper_end|>
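The excerpt above only names the clip-extraction pipeline without detailing it. As a purely hypothetical illustration of the kind of per-frame quality gate such a pipeline might apply, one could combine a resolution threshold with a blur score (variance of the Laplacian); the function name and both thresholds below are made-up placeholders, not criteria from the paper:

    import cv2  # OpenCV

    def frame_is_high_quality(frame, min_side=512, min_sharpness=100.0):
        # Hypothetical per-frame gate: reject small or blurry frames.
        # Both thresholds are illustrative placeholders, not VFHQ's criteria.
        h, w = frame.shape[:2]
        if min(h, w) < min_side:
            return False
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        return sharpness >= min_sharpness

A gate of this kind would typically run before face detection and alignment, so that only frames worth processing enter the more expensive stages of the pipeline.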
[ "<|reference_start|> Towards Real-World Blind Face Restoration with Generative Facial Prior: Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets. <|reference_end|>", "<|reference_start|> GAN Prior Embedded Network for Blind Face Restoration in the Wild: Blind face restoration (BFR) from severely degraded face images in the wild is a very challenging problem. Due to the high illness of the problem and the complex unknown degradation, directly training a deep neural network (DNN) usually cannot lead to acceptable results. Existing generative adversarial network (GAN) based methods can produce better results but tend to generate over-smoothed restorations. In this work, we propose a new method by first learning a GAN for high-quality face image generation and embedding it into a U-shaped DNN as a prior decoder, then fine-tuning the GAN prior embedded DNN with a set of synthesized low-quality face images. The GAN blocks are designed to ensure that the latent code and noise input to the GAN can be respectively generated from the deep and shallow features of the DNN, controlling the global face structure, local face details and background of the reconstructed image. The proposed GAN prior embedded network (GPEN) is easy-to-implement, and it can generate visually photo-realistic results. Our experiments demonstrated that the proposed GPEN achieves significantly superior results to state-of-the-art BFR methods both quantitatively and qualitatively, especially for the restoration of severely degraded face images in the wild. The source code and models can be found at https://github.com/yangxy/GPEN. <|reference_end|>", "<|reference_start|> VoxCeleb: a large-scale speaker identification dataset: Most existing datasets for speaker identification contain samples obtained under quite constrained conditions, and are usually hand-annotated, hence limited in size. The goal of this paper is to generate a large scale text-independent speaker identification dataset collected 'in the wild'. We make two contributions. First, we propose a fully automated pipeline based on computer vision techniques to create the dataset from open-source media. Our pipeline involves obtaining videos from YouTube; performing active speaker verification using a two-stream synchronization Convolutional Neural Network (CNN), and confirming the identity of the speaker using CNN based facial recognition. We use this pipeline to curate VoxCeleb which contains hundreds of thousands of 'real world' utterances for over 1,000 celebrities. 
Our second contribution is to apply and compare various state of the art speaker identification techniques on our dataset to establish baseline performance. We show that a CNN based architecture obtains the best performance for both identification and verification. <|reference_end|>", "<|reference_start|> VoxCeleb2: Deep Speaker Recognition: The objective of this paper is speaker recognition under noisy and unconstrained conditions. We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2 which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset. Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin. <|reference_end|>" ]
[ 0, 1, 3, 4 ]
{"<|multi_cite_1_1|>": "ss-1050423", "<|multi_cite_1_2|>": "ss-940312", "<|multi_cite_1_3|>": "arxiv-128911", "<|multi_cite_2_1|>": "arxiv-290638", "<|multi_cite_2_2|>": "arxiv-141599", "<|multi_cite_2_3|>": "ss-754034", "<|multi_cite_3_1|>": "arxiv-154859", "<|multi_cite_3_2|>": "ss-1353350", "<|multi_cite_3_3|>": "arxiv-210136", "<|multi_cite_3_4|>": "arxiv-282182", "<|multi_cite_4_1|>": "arxiv-314558", "<|multi_cite_4_2|>": "arxiv-340670", "<|cite_5|>": "arxiv-184253", "<|multi_cite_6_1|>": "arxiv-290638", "<|multi_cite_6_2|>": "arxiv-314558", "<|multi_cite_6_3|>": "arxiv-340670", "<|multi_cite_7_1|>": "ss-1009350", "<|multi_cite_7_2|>": "arxiv-236199", "<|multi_cite_7_3|>": "arxiv-248434", "<|cite_8|>": "arxiv-127738", "<|cite_9|>": "arxiv-162541", "<|multi_cite_10_1|>": "ss-677921", "<|multi_cite_10_2|>": "arxiv-167988", "<|multi_cite_10_3|>": "arxiv-298402", "<|multi_cite_11_1|>": "arxiv-167988", "<|multi_cite_11_2|>": "arxiv-298402", "<|multi_cite_12_1|>": "arxiv-203120", "<|multi_cite_12_2|>": "arxiv-307649", "<|cite_13|>": "arxiv-307649", "<|multi_cite_14_1|>": "ss-805363", "<|multi_cite_14_2|>": "arxiv-105885", "<|cite_15|>": "arxiv-314455", "<|multi_cite_16_1|>": "ss-940312", "<|multi_cite_16_2|>": "arxiv-87200", "<|multi_cite_16_3|>": "arxiv-105885", "<|multi_cite_16_4|>": "ss-680309", "<|multi_cite_16_5|>": "arxiv-150830", "<|multi_cite_16_6|>": "arxiv-165107", "<|multi_cite_16_7|>": "arxiv-171034", "<|multi_cite_16_8|>": "arxiv-255192", "<|multi_cite_16_9|>": "ss-940316", "<|multi_cite_16_10|>": "arxiv-268998", "<|multi_cite_16_11|>": "ss-1273271", "<|multi_cite_17_1|>": "arxiv-141599", "<|multi_cite_17_2|>": "arxiv-102286", "<|multi_cite_17_3|>": "arxiv-290638", "<|multi_cite_17_4|>": "ss-754034", "<|multi_cite_17_5|>": "ss-764773", "<|multi_cite_18_1|>": "arxiv-154859", "<|multi_cite_18_2|>": "ss-1353350", "<|multi_cite_19_1|>": "arxiv-314558", "<|multi_cite_19_2|>": "arxiv-340670", "<|multi_cite_20_1|>": "arxiv-236199", "<|multi_cite_20_2|>": "arxiv-248434", "<|multi_cite_20_3|>": "arxiv-226012", "<|multi_cite_21_1|>": "arxiv-141148", "<|multi_cite_21_2|>": "arxiv-307649", "<|multi_cite_21_3|>": "arxiv-203120", "<|cite_22|>": "ss-1257437", "<|cite_23|>": "arxiv-69417", "<|cite_24|>": "ss-709593", "<|cite_25|>": "ss-768664", "<|cite_26|>": "arxiv-184253", "<|multi_cite_27_1|>": "arxiv-314558", "<|multi_cite_27_2|>": "arxiv-340670", "<|multi_cite_27_3|>": "arxiv-290638", "<|cite_28|>": "arxiv-127738", "<|cite_29|>": "arxiv-162541"}
2405.09115
<|paper_start|> Title: Hybrid Meta-Solving for Practical Quantum Computing Abstract: Hybrid Meta-Solving for Practical Quantum Computing: The advent of quantum algorithms has initiated a discourse on the potential for quantum speedups for optimization problems. However, several factors still hinder a practical realization of the potential benefits. These include the lack of advanced, error-free quantum hardware, the absence of accessible software stacks for seamless integration and interaction, and the lack of methods that allow us to leverage the theoretical advantages to real-world use cases. This paper works towards the creation of an accessible hybrid software stack for solving optimization problems, aiming to create a fundamental platform that can utilize quantum technologies to enhance the solving process. We introduce a novel approach that we call Hybrid Meta-Solving, which combines classical and quantum optimization techniques to create customizable and extensible hybrid solvers. We decompose mathematical problems into multiple sub-problems that can be solved by classical or quantum solvers, and propose techniques to semi-automatically build the best solver for a given problem. Implemented in our ProvideQ toolbox prototype, Meta-Solving provides interactive workflows for accessing quantum computing capabilities. Our evaluation demonstrates the applicability of Meta-Solving in industrial use cases. It shows that we can reuse state-of-the-art classical algorithms and extend them with quantum computing techniques. Our approach is designed to be at least as efficient as state-of-the-art classical techniques, while having the potential to outperform them if future advances in the quantum domain are made. Introduction \label{sec:introduction} Quantum algorithms have demonstrated theoretical advantages over their classical counterparts in addressing problems such as unstructured database search <|cite_start|> (Reference: {A fast quantum mechanical algorithm for database search: were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ----------------------------------------------) <|cite_end|>, factorization <|cite_start|> (Reference: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed to be able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. 
This paper considers factoring integers and finding discrete logarithms, two problems that are generally thought to be hard on classical computers and that have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, for example, the number of digits of the integer to be factored.) <|cite_end|>, and testing whether a function is constant <|cite_start|> (Reference: Rapid solution of problems by quantum computation: A class of problems is described which can be solved more efficiently by quantum computation than by any classical or stochastic method. The quantum computation solves the problem with certainty in exponentially less time than any classical deterministic computation.) <|cite_end|>. These theoretical advantages have prompted a discussion of potential applications of quantum technologies to the optimization domain, with the objective of achieving practical quantum speedups and creating more efficient solvers. Nevertheless, we are currently in the early stages of quantum computing, where the practical quantum advantages for optimization problems have yet to be realized <|cite_start|> (Reference: Limitations of optimization algorithms on noisy quantum devices: ) <|cite_end|> <|cite_start|> (Reference: Greedy Gradient-free Adaptive Variational Quantum Algorithms on a Noisy Intermediate Scale Quantum Computer: Hybrid quantum-classical adaptive Variational Quantum Eigensolvers (VQE) already hold the potential to outperform classical computing for simulating quantum many-body systems. However, their practical implementation on current quantum processing units (QPUs) is very challenging due to the noisy evaluation of a polynomially scaling number of observables, undertaken for operator selection and optimisation of a high-dimensional cost function. To overcome this, we propose new techniques to execute adaptive algorithms on a 25-qubit error-mitigated QPU coupled to a GPU-accelerated HPC simulator. Targeting physics applications, we compute the ground state of a 25-body Ising model using the newly introduced Greedy Gradient-free Adaptive VQE (CGA-VQE) requiring only five circuit measurements per iteration, regardless of the number of qubits and size of the operator pool. Towards chemistry, we combine the GGA-VQE and Overlap-ADAPT-VQE algorithms to approximate a molecular system ground state. We show that the QPU successfully executes the algorithms and yields the correct choice of parametrised unitary operators. While the QPU evaluation of the resulting ansatz wave-function is polluted by hardware noise, a single final evaluation of the sought-after observables on a classical GPU-accelerated/noiseless simulator allows the recovery of the correct approximation of the ground state, thus highlighting the need for hybrid quantum-classical observable measurement.) <|cite_end|>. There are several factors currently preventing us from achieving quantum supremacy, a major one being the lack of scalable quantum hardware with large numbers of qubits, high connectivity, and efficient error correction. However, the availability of advanced quantum hardware does not guarantee the development of superior solvers. We have yet to identify methods for translating the theoretical speedups into practical applications.
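As a minimal, self-contained illustration of such an oracle-based advantage (a toy sketch of ours, assuming only NumPy, not part of the original paper), the following program simulates the Deutsch-Jozsa algorithm, which decides whether a Boolean function is constant or balanced with a single oracle application, whereas a deterministic classical algorithm may need up to 2^(n-1)+1 evaluations. It uses the standard phase-kickback simplification, so no ancilla qubit is modeled.
\begin{verbatim}
import numpy as np

def deutsch_jozsa(f, n):
    """Toy statevector simulation: decide whether f: {0,1}^n -> {0,1}
    is constant or balanced using one application of a phase oracle."""
    N = 2 ** n
    # Hadamard on every qubit of |0...0>: uniform superposition over inputs
    state = np.full(N, 1.0 / np.sqrt(N))
    # Phase oracle |x> -> (-1)^f(x) |x>  (phase-kickback form)
    for x in range(N):
        if f(x):
            state[x] *= -1.0
    # Apply Hadamard on every qubit again
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    # Amplitude of |0...0> is (1/N) * sum_x (-1)^f(x):
    # magnitude 1 for constant f, exactly 0 for balanced f.
    return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))                       # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))   # balanced (parity)
\end{verbatim}
The same one-query behavior carries over to a real gate-based implementation; the point here is only to make the oracle abstraction tangible.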
To this end, we must create software stacks that facilitate the integration of quantum solutions into broader computational pipelines, where they can operate in conjunction with classical computers in an efficient and effective manner. Moreover, it is necessary to investigate how the theoretical advantages of quantum computing can be applied in actual computational pipelines, where information between classical and quantum computers must be transferred continuously. Currently, encoding information on quantum computers requires extensive transformation techniques, for instance, when creating oracles to apply Grover's algorithm <|cite_start|> (Reference: {A fast quantum mechanical algorithm for database search: were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ----------------------------------------------) <|cite_end|>, or when transforming constrained problems into Quadratic Unconstrained Binary Optimization (QUBO) problems to apply quantum approximation algorithms <|cite_start|> (Reference: {A quantum approximate optimization algorithm: We introduce a quantum algorithm that produces approximate solutions for combinatorial optimization problems. The algorithm depends on an integer p ≥ 1 and the quality of the approximation improves as p is increased. The quantum circuit that implements the algorithm consists of unitary gates whose locality is at most the locality of the objective function whose optimum is sought. The depth of the circuit grows linearly with p times (at worst) the number of constraints. If p is fixed, that is, independent of the input size, the algorithm makes use of efficient classical pre-processing. If p grows with the input size a different strategy is proposed. We study the algorithm as applied to MaxCut on regular graphs and analyze its performance on 2-regular and 3-regular graphs for fixed p . For p = 1, on 3-regular graphs the quantum algorithm always finds a cut that is at least 0.6924 times the size of the optimal cut.) <|cite_end|>. Next, quantum computing must be made more accessible to the general public. Vendors of quantum solutions require their users to utilize their frameworks in a manner that is opaque to the user, limiting the user's ability to adapt the framework to diverse real-world problems <|cite_start|> (Reference: Hybrid Quantum Solvers in Production: how to succeed in the NISQ era?: Hybrid quantum computing is considered the present and the future within the field of quantum computing. Far from being a passing fad, this trend cannot be considered just a stopgap to address the limitations of NISQ-era devices. The foundations linking both computing paradigms will remain robust over time.
The contribution of this work is twofold: first, we describe and categorize some of the most frequently used hybrid solvers, resorting to two different taxonomies recently published in the literature. Secondly, we put a special focus on two solvers that are currently deployed in real production and that have demonstrated to be near the real industry. These solvers are the LeapHybridBQMSampler contained in D-Wave's Hybrid Solver Service and Quantagonia's Hybrid Solver. We analyze the performance of both methods using as benchmarks four combinatorial optimization problems.) <|cite_end|>. In other instances, users are provided with only basic programming kits and frameworks, such as Qiskit, Pennylane <|cite_start|> (Reference: PennyLane: Automatic differentiation of hybrid quantum-classical computations: PennyLane is a Python 3 software framework for differentiable programming of quantum computers. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware. We provide plugins for hardware providers including the Xanadu Cloud, Amazon Braket, and IBM Quantum, allowing PennyLane optimizations to be run on publicly accessible quantum devices. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, JAX, and Autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.) <|cite_end|>, or Qrisp <|cite_start|> (Reference: Qrisp: A Framework for Compilable High-Level Programming of Gate-Based Quantum Computers: While significant progress has been made on the hardware side of quantum computing, support for high-level quantum programming abstractions remains underdeveloped compared to classical programming languages. In this article, we introduce Qrisp, a framework designed to bridge several gaps between high-level programming paradigms in state-of-the-art software engineering and the physical reality of today's quantum hardware. The framework aims to provide a systematic approach to quantum algorithm development such that they can be effortlessly implemented, maintained and improved. We propose a number of programming abstractions that are inspired by classical paradigms, yet consistently focus on the particular needs of a quantum developer. Unlike many other high-level language approaches, Qrisp's standout feature is its ability to compile programs to the circuit level, making them executable on most existing physical backends. The introduced abstractions enable the Qrisp compiler to leverage algorithm structure for increased compilation efficiency. Finally, we present a set of code examples, including an implementation of Shor's factoring algorithm. For the latter, the resulting circuit shows significantly reduced quantum resource requirements, strongly supporting the claim that systematic quantum algorithm development can give quantitative benefits.) <|cite_end|>. 
These frameworks require users to possess advanced expertise and to implement the core functionality themselves. Both of these options are suboptimal, as users should not be forced to identify opportunities and implement quantum applications themselves. Rather, they should have the ability to customize quantum application pipelines to optimize their performance and meet their specific needs. Ultimately, an abstraction layer that covers both the classical and quantum parts of the computation is needed. This paper works towards the creation of an accessible hybrid software stack for solving optimization problems, aiming to create a fundamental platform that can utilize quantum technologies to enhance the solving process. We introduce a concept called Hybrid Meta-Solving, which combines the advantages of classical and quantum optimization in hybrid solution strategies to create new, powerful ways to solve well-known mathematical problems. Meta-Solving describes the decomposition of a mathematical problem into multiple sub-problems, each of which can be solved by a selection of solvers. Using expert knowledge, empirical data, and established heuristics, we can compare potential classical and quantum solvers for a subroutine and find the best solver for the given problem. This paper outlines the fundamental concepts of Meta-Solving and illustrates how these concepts can be utilized to create interactive, semi-automated workflows. We explain how users can use these workflows to exploit the potential of quantum computing and find efficient solutions for given algorithmic problems. A first prototype implementing the fundamentals of Meta-Solving is available in our ProvideQ toolbox <|cite_start|> (Reference: Providing Quantum Readiness: The Vision of the ProvideQ Toolbox: : Quantum computing has the potential to exponentially accelerate the solution of specific problems compared to classical computing. However, the accessibility to quantum computing is currently limited due to the technical challenges posed by quantum devices. Additionally, implementing quantum algorithms is challenging because of the complex nature of quantum systems. To address these challenges, we introduce the ProvideQ Toolbox, a framework designed to enhance the accessibility of quantum computing, especially for optimization problems. Our toolbox includes a range of classical and quantum state-of-the-art optimization algorithms and employs meta-solver strategies to determine the best quantum optimization algorithm or quantum subroutine in a classical algorithm for a given optimization problem. The ProvideQ Toolbox can be used via a web-based frontend and an API and can be seamlessly integrated with multiple quantum computing backends and classical optimization frameworks.) <|cite_end|>. Our evaluation demonstrates that our Meta-Solving concept is applicable to realistic problems and achieves at least the same performance as state-of-the-art classical approaches. While we are not yet able to reach actual quantum speedups, we show how a fundamental platform that integrates classical and quantum techniques can be created.
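To make the solver-selection idea behind Meta-Solving concrete, the following deliberately simplified Python sketch shows one way such a loop could look. All names here (Solver, MetaSolver, is_applicable, score) are our own illustrative assumptions and do not reflect the actual ProvideQ API.
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Solver:
    name: str
    kind: str                             # "classical" or "quantum"
    is_applicable: Callable[[Any], bool]  # can this solver handle the subproblem?
    score: Callable[[Any], float]         # heuristic quality/runtime estimate
    run: Callable[[Any], Any]             # actually solve the subproblem

class MetaSolver:
    """Decompose a problem, pick the best registered solver per subproblem,
    and recompose the partial results into an overall solution."""

    def __init__(self,
                 decompose: Callable[[Any], List[Any]],
                 recompose: Callable[[List[Any]], Any]):
        self.decompose = decompose
        self.recompose = recompose
        self.registry: List[Solver] = []

    def register(self, solver: Solver) -> None:
        self.registry.append(solver)

    def solve(self, problem: Any) -> Any:
        partials = []
        for sub in self.decompose(problem):
            candidates = [s for s in self.registry if s.is_applicable(sub)]
            if not candidates:
                raise ValueError("no registered solver fits this subproblem")
            best = max(candidates, key=lambda s: s.score(sub))  # heuristic choice
            partials.append(best.run(sub))
        return self.recompose(partials)
\end{verbatim}
In this reading, a quantum routine is simply one more registry entry whose score reflects current hardware limits; as the hardware matures, the same selection loop would begin to prefer it without any change to the surrounding pipeline.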
Related Work \label{sec:background} This section briefly introduces the background to quantum computing and presents state-of-the-art approaches and existing work related to our Meta-Solving concept. \subsection{Current-era Quantum Computing} Today we are in what is known as the Noisy Intermediate-Scale Quantum (NISQ) <|cite_start|> (Reference: Quantum Computing: Researchers are optimistic, but a practical device is years away.) <|cite_end|> era. Medium-scale quantum computers with a few hundred qubits are available and can be programmed using a gate-based programming model. However, the hardware is still noisy, and it requires expensive error-mitigation measures to produce reasonable results even for very small problems <|cite_start|> (Reference: NISQ Computers: A Path to Quantum Supremacy: The quest for quantum advantage, wherein quantum computers surpass the computational capabilities of classical computers executing state-of-the-art algorithms on well-defined tasks, represents a pivotal race in the domain of quantum computing. NISQ (Noisy Intermediate-Scale Quantum) computing has witnessed remarkable advancements, culminating in significant milestones on the journey towards the realization of universal fault-tolerant quantum computers. This transformative turning point, known as quantum supremacy, has been achieved amid a series of breakthroughs, signifying the dawn of the quantum era. Quantum hardware has undergone substantial integration and architectural evolution, contrasting with its nascent stages. In this review, we critically examine the quantum supremacy experiments conducted thus far, shedding light on their implications and contributions to the evolving landscape of quantum computing. Additionally, we endeavor to illuminate a range of cutting-edge proof-of-principle investigations in the realm of applied quantum computing, providing an insightful overview of the current state of applied quantum research and its prospective influence across diverse scientific, industrial, and technological frontiers.) <|cite_end|>. A plethora of quantum algorithms are currently being studied on small, error-prone quantum computers or simulators, as well as through theoretical means. Algorithms such as Grover <|cite_start|> (Reference: {A fast quantum mechanical algorithm for database search: were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ----------------------------------------------) <|cite_end|>, Shor <|cite_start|> (Reference: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed to be able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration.
This paper considers factoring integers and finding discrete logarithms, two problems that are generally thought to be hard on classical computers and that have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, for example, the number of digits of the integer to be factored.) <|cite_end|>, and Deutsch-Jozsa <|cite_start|> (Reference: Rapid solution of problems by quantum computation: A class of problems is described which can be solved more efficiently by quantum computation than by any classical or stochastic method. The quantum computation solves the problem with certainty in exponentially less time than any classical deterministic computation.) <|cite_end|> were designed even before the first quantum computers became available. These algorithms provide theoretically proven advantages, but we are currently unable to leverage them in practice due to a number of factors, including the fact that they were designed for fault-tolerant quantum computers, which are not yet available. To make quantum computing viable in the near future, NISQ-tailored quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) <|cite_start|> (Reference: {A quantum approximate optimization algorithm: We introduce a quantum algorithm that produces approximate solutions for combinatorial optimization problems. The algorithm depends on an integer p ≥ 1 and the quality of the approximation improves as p is increased. The quantum circuit that implements the algorithm consists of unitary gates whose locality is at most the locality of the objective function whose optimum is sought. The depth of the circuit grows linearly with p times (at worst) the number of constraints. If p is fixed, that is, independent of the input size, the algorithm makes use of efficient classical pre-processing. If p grows with the input size a different strategy is proposed. We study the algorithm as applied to MaxCut on regular graphs and analyze its performance on 2-regular and 3-regular graphs for fixed p . For p = 1, on 3-regular graphs the quantum algorithm always finds a cut that is at least 0.6924 times the size of the optimal cut.) <|cite_end|> and the Variational Quantum Eigensolver (VQE) <|cite_start|> (Reference: A variational eigenvalue solver on a photonic quantum processor: ) <|cite_end|> have been developed. However, it has not yet been demonstrated that the NISQ-tailored algorithms can provide actual speedups.
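To show the kind of reformulation that QAOA expects, here is a small, self-contained sketch (our illustration; maxcut_qubo and brute_force are hypothetical helper names, not from the paper) that encodes MaxCut as a QUBO, i.e., minimize x^T Q x over binary x, and solves a toy instance by classical enumeration. A QAOA circuit would consume the same matrix Q after mapping x_i -> (1 - z_i)/2 onto Pauli-Z operators.
\begin{verbatim}
import numpy as np
from itertools import product

def maxcut_qubo(n, edges):
    """MaxCut as a QUBO. The cut contribution of edge (i, j) is
    x_i + x_j - 2*x_i*x_j; we negate it so that minimizing x^T Q x
    maximizes the cut."""
    Q = np.zeros((n, n))
    for i, j in edges:
        Q[i, i] -= 1.0
        Q[j, j] -= 1.0
        Q[i, j] += 2.0      # coupling term kept in the upper triangle
    return Q

def brute_force(Q):
    """Classical baseline: enumerate all 2^n binary assignments."""
    xs = [np.array(bits) for bits in product((0, 1), repeat=Q.shape[0])]
    best = min(xs, key=lambda x: x @ Q @ x)
    return best, best @ Q @ best

# 4-cycle graph: the optimal cut has value 4, i.e., QUBO energy -4.
Q = maxcut_qubo(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
x, energy = brute_force(Q)
print(x, energy)   # e.g. [0 1 0 1] -4.0
\end{verbatim}
The exponential enumeration is exactly the step a quantum approximation algorithm aims to shortcut; whether that translates into a practical speedup on NISQ hardware is, as noted above, still open.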
\subsection{Quantum Computing Platforms for Optimization} With the constant improvement of the capacities of actual quantum hardware and the new possibilities for algorithms executed on it, various endeavours have started working on an abstraction layer that relieves the end user from deciding between the numerous options. Formally, one can integrate these options in a modular decision tree, with a set of options then forming a so-called Solution Path, and recommend Solution Paths based on various metrics and characteristics of the application <|cite_start|> (Reference: stating the issues and recommending solutions.: ) <|cite_end|>. Finding and evaluating good Solution Paths for application problems like vehicle routing is hard, however, and requires extensive domain knowledge along with hardware improvements and computational tests <|cite_start|> (Reference: Quantum-assisted quantum compiling: Compiling quantum algorithms for near-term quantum computers (accounting for connectivity and native gate alphabets) is a major challenge that has received significant attention both by industry and academia. Avoiding the exponential overhead of classical simulation of quantum dynamics will allow compilation of larger algorithms, and a strategy for this is to evaluate an algorithm's cost on a quantum computer. To this end, we propose a variational hybrid quantum-classical algorithm called quantum-assisted quantum compiling (QAQC). In QAQC, we use the overlap between a target unitary U and a trainable unitary V as the cost function to be evaluated on the quantum computer. More precisely, to ensure that QAQC scales well with problem size, our cost involves not only the global overlap Tr(V†U) but also the local overlaps with respect to individual qubits. We introduce novel short-depth quantum circuits to quantify the terms in our cost function, and we prove that our cost cannot be efficiently approximated with a classical algorithm under reasonable complexity assumptions. We present both gradient-free and gradient-based approaches to minimizing this cost. As a demonstration of QAQC, we compile various one-qubit gates on IBM's and Rigetti's quantum computers into their respective native gate alphabets. Furthermore, we successfully simulate QAQC up to a problem size of 9 qubits, and these simulations highlight both the scalability of our cost function as well as the noise resilience of QAQC. Future applications of QAQC include algorithm depth compression, black-box compiling, noise mitigation, and benchmarking.) <|cite_end|>. Hybrid solvers are in development, e.g., by Quantagonia or D-Wave <|cite_start|> (Reference: D-Wave: Google recently announced a breakthrough in the field of quantum computing: using the D-Wave quantum computer (Figure 1), they solved certain problems 100 million times faster than conventional computers. If this is true, it will bring enormous progress to artificial intelligence technology. However, some experts have questioned these claims, arguing that they are exaggerated. So what exactly is Google's so-called D-Wave quantum computer? Can it really perform quantum computation, and what is this 100-million-fold speedup about?) <|cite_end|>, though focusing mostly on annealing methods for now due to their greater maturity. A thorough description and benchmark can be found in <|cite_start|> (Reference: Hybrid Quantum Solvers in Production: how to succeed in the NISQ era?: Hybrid quantum computing is considered the present and the future within the field of quantum computing. Far from being a passing fad, this trend cannot be considered just a stopgap to address the limitations of NISQ-era devices. The foundations linking both computing paradigms will remain robust over time. The contribution of this work is twofold: first, we describe and categorize some of the most frequently used hybrid solvers, resorting to two different taxonomies recently published in the literature. Secondly, we put a special focus on two solvers that are currently deployed in real production and that have demonstrated to be near the real industry. These solvers are the LeapHybridBQMSampler contained in D-Wave's Hybrid Solver Service and Quantagonia's Hybrid Solver. We analyze the performance of both methods using as benchmarks four combinatorial optimization problems.) <|cite_end|>. The PlanQK platform provides a first hardware-agnostic platform and a vision of how end users can approach solving various application cases with quantum-enhanced algorithms.
These efforts toward abstraction go beyond optimization and similarly extend to Quantum Machine Learning. Our work focuses on the integration of quantum computing and its existing platforms into classical optimization techniques. We combine the best of both worlds with the goal of building a new platform that gives users the tools they need to take advantage of quantum computing and build efficient hybrid solvers tailored to their needs. \subsection{Classical Optimization and Polylithic Modeling} In the domain of classical mathematical optimization, specialized solvers have been developed to address particular problems with high efficiency, exemplified by the TSP solver Concorde. Additionally, optimization solvers that cater to broader problem classes, such as Linear Programming (LP), Mixed-Integer Linear Programming (MILP), Nonlinear Programming (NLP), and Mixed-Integer Nonlinear Programming (MINLP), exhibit considerable computational power. These solvers have undergone continuous refinement, resulting in remarkable speedups over several decades <|cite_start|> (Reference: Progress in mathematical programming solvers from 2001 to 2020: ) <|cite_end|> <|cite_start|> (Reference: MINLP Solver Software: In this article we give a brief overview of the start-of-the-art in software for the solution of mixed integer nonlinear programs (MINLP). We establish several groupings with respect to various features and give concise individual descriptions for each solver. The provided information may guide the selection of a best solver for a particular MINLP problem. Keywords: mixed integer nonlinear programming; solver; software; MINLP; MIQCP) <|cite_end|> and are presently employed to tackle complex, real-world challenges. However, as solvers become more efficient, users strive to build more accurate models, thereby increasing their complexity. Despite the advancements in optimization solver technology, a number of practical optimization challenges remain that are not adequately addressed by current state-of-the-art solutions. In response to this gap, methodologies that decompose a complex, "monolithic" problem into a series of simpler, more manageable subproblems have gained prominence. Termed "polylithic" modeling and solution approaches, this strategy entails the development of customized methods that incorporate multiple models and/or algorithmic components <|cite_start|> (Reference: Polylithic modeling and solution approaches using algebraic modeling systems: ) <|cite_end|>. Here, the solution derived from one model serves as the input for another. Notable examples of such polylithic approaches include decomposition techniques (e.g. Benders <|cite_start|> (Reference: Partitioning procedures for solving mixed-variables programming problems: ) <|cite_end|> and Dantzig-Wolfe <|cite_start|> (Reference: The decomposition algorithm for linear programs: A procedure is presented for the efficient computational solution of linear programs having a certain structural property characteristic of a large class of problems of practical interest. The property makes possible the decomposition of the problem into a sequence of small linear programs whose iterated solutions solve the given problem through a generalization of the simplex method for linear programming. 1.
THE DECOMPOSED LINEAR PROGRAM MANY LINEAR programming problems of practical interest have the property that they may be described, in part, as composed of separate linear programming problems tied together by a number of constraints considerably smaller than the total number imposed on the problem. When the matrix of coefficients of such a problem, suitably ordered, is displayed in the usual way, a pattern emerges like that shown in Figure 1. In this figure the constraint matrix has been partitioned into nonzero blocks A1 and By, the right-hand side column of constants correspondingly into b, bl,..., bn; and the "costs,") <|cite_end|> decomposition), advanced MILP and MINLP solvers that integrate presolve strategies with the sequential resolution of subproblems (frequently employing various external sub-solvers) within a Branch and Cut framework, and hybrid methods that integrate constructive heuristics and local search improvement strategies with exact Mathematical Programming algorithms. While polylithic modeling is not inherently dependent on any specific software, algebraic modeling languages such as the General Algebraic Modeling System (GAMS) have demonstrated significant utility in facilitating the implementation of these sophisticated approaches <|cite_start|> (Reference: Polylithic modeling and solution approaches using algebraic modeling systems: ) <|cite_end|>. In our work, we draw inspiration from well-established polylithic approaches in classical optimization and extend them with quantum computing techniques. We actively reuse openly available state-of-the-art solvers and decompositions to maximize the efficiency of our approach. Furthermore, we bundle the existing state-of-the-art into a user-friendly toolbox to enable easy reusability and extensibility. <|paper_end|>
[ "<|reference_start|> {A fast quantum mechanical algorithm for database search: were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ---------------------------------------------- <|reference_end|>", "<|reference_start|> stating the issues and recommending solutions.: <|reference_end|>", "<|reference_start|> Polylithic modeling and solution approaches using algebraic modeling systems: <|reference_end|>", "<|reference_start|> The decomposition algorithm for linear programs: A procedure is presented for the efficient computational solution of linear programs having a certain structural property characteristic of a large class of problems of practical interest. The property makes possible the decomposition of the problem into a sequence of small linear programs whose iterated solutions solve the given problem through a generalization of the simplex method for linear programming. 1. THE DECOMPOSED LINEAR PROGRAM MANY LINEAR programming problems of practical interest have the property that they may be described, in part, as composed of separate linear programming problems tied together by a number of constraints considerably smaller than the total number imposed on the problem. When the matrix of coefficients of such a problem, suitably ordered, is displayed in the usual way, a pattern emerges like that shown in Figure 1. In this figure the constraint matrix has been partitioned into nonzero blocks A1 and By, the right-hand side column of constants correspondingly into b, bl,..., bn; and the \"costs,\" <|reference_end|>" ]
[ 5, 18, 24, 26 ]
{"<|cite_1|>": "ss-679651", "<|cite_2|>": "ss-855582", "<|cite_3|>": "ss-1516724", "<|multi_cite_4_1|>": "ss-1516788", "<|multi_cite_4_2|>": "ss-1668292", "<|cite_5|>": "ss-679651", "<|cite_6|>": "ss-765879", "<|cite_7|>": "arxiv-576856", "<|cite_9|>": "arxiv-180130", "<|cite_10|>": "arxiv-630617", "<|cite_11|>": "ss-1181399", "<|cite_12|>": "ss-1378997", "<|cite_13|>": "arxiv-544860", "<|cite_14|>": "ss-679651", "<|cite_15|>": "ss-855582", "<|cite_16|>": "ss-1516724", "<|cite_17|>": "ss-765879", "<|cite_18|>": "ss-1032542", "<|cite_19|>": "ss-1181400", "<|cite_20|>": "ss-1181401", "<|cite_22|>": "ss-825606", "<|cite_23|>": "arxiv-576856", "<|multi_cite_27_1|>": "ss-681949", "<|multi_cite_27_2|>": "ss-1181402", "<|cite_28|>": "ss-1181403", "<|cite_29|>": "ss-683975", "<|cite_30|>": "ss-1181404", "<|cite_31|>": "ss-1181403"}
2401.09496
<|paper_start|> Title: Learning to Generalize over Subpartitions for Heterogeneity-aware Domain Adaptive Nuclei Segmentation Abstract: Learning to Generalize over Subpartitions for Heterogeneity-aware Domain Adaptive Nuclei Segmentation: Annotation scarcity and cross-modality/stain data distribution shifts are two major obstacles hindering the application of deep learning models for nuclei analysis, which holds a broad spectrum of potential applications in digital pathology. Recently, unsupervised domain adaptation (UDA) methods have been proposed to mitigate the distributional gap between different imaging modalities for unsupervised nuclei segmentation in histopathology images. However, existing UDA methods are built upon the assumption that data distributions within each domain should be uniform. Based on this over-simplified supposition, they propose to align the histopathology target domain with the source domain integrally, neglecting severe intra-domain discrepancy over subpartitions incurred by mixed cancer types and sampling organs. In this paper, for the first time, we propose to explicitly consider the heterogeneity within the histopathology domain and introduce open compound domain adaptation (OCDA) to resolve the crux. Specifically, a two-stage disentanglement framework is proposed to acquire domain-invariant feature representations at both image and instance levels. The holistic design addresses the limitations of existing OCDA approaches which struggle to capture instance-wise variations. Two regularization strategies are specifically devised herein to leverage the rich subpartition-specific characteristics in histopathology images and facilitate subdomain decomposition. Moreover, we propose a dual-branch nucleus shape and structure preserving module to prevent nucleus over-generation and deformation in the synthesized images. Experimental results on both cross-modality and cross-stain scenarios over a broad range of diverse datasets demonstrate the superiority of our method compared with state-of-the-art UDA and OCDA methods. Introduction \label{sec1} Nuclei instance segmentation, which demands both accurate localization and precise boundary delineation of each cell nucleus, plays an essential role in computer-aided digital pathology analysis\; <|cite_start|> (Reference: Cancer nucleus: Morphology and beyond: There are many significant morphological alterations of a nucleus of cancer cell that are detectable by light microscopy on routine staining. These changes are often associated with deranged cellular functions of cancer cell. It is difficult to understand the exact relationship between nuclear morphology and alteration of nuclear structural organization in cancer. Herein, the salient visual and subvisual morphological changes of cancer nuclei and their possible etiology and significance have been reviewed. Diagn. Cytopathol. 2010. © 2009 Wiley‐Liss, Inc.) <|cite_end|>.
It captures rich characteristics of cell nuclei clusters, including their spatial distribution information and pleomorphic features, to comprehensively represent the properties of the tumor microenvironment and is thus valuable for various clinical tasks, such as cancer identification and grading\; <|cite_start|> (Reference: Scoring nuclear pleomorphism in breast cancer: Scoring nuclear pleomorphism in breast cancer) <|cite_end|> <|cite_start|> (Reference: Prognostic value of automatically extracted nuclear morphometric features in whole slide images of male breast cancer: ) <|cite_end|> <|cite_start|> (Reference: Characterization of drug effects on cell cultures from phase-contrast microscopy images: ) <|cite_end|>. Recently, deep learning-based methods have emerged as a popular line of research for nuclei instance segmentation\; <|cite_start|> (Reference: Micro-Net: A unified model for segmentation of various objects in microscopy images: Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolution neural network (CNN) based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context and generates the output using multi-resolution deconvolution filters. The extra convolutional layers which bypass the max-pooling operation allow the network to train for variable input intensities and object size and make it robust to noisy data. We compare our results on publicly available data sets and show that the proposed network outperforms recent deep learning algorithms.) <|cite_end|> <|cite_start|> (Reference: Analyzing u-net robustness for single cell nucleus segmentation from phase contrast images: We quantify the robustness of the semantic segmentation model U-Net, applied to single cell nuclei detection, with respect to the following factors: (1) automated vs manual training annotations, (2) quantity of training data, and (3) microscope image focus. The difficulty of obtaining sufficient volumes of accurate manually annotated training data to create an accurate Convolutional Neural Networks (CNN) model is overcome by the temporary use of fluorescent labels to automate the creation of training datasets using traditional image processing algorithms. The accuracy measurement is computed with respect to manually annotated masks which were also created to evaluate the effectiveness of using automated training set generation via the fluorescent images. The metric to compute the accuracy is the false positive/negative rate of cell nuclei detection. The goal is to maximize the true positive rate while minimizing the false positive rate. We found that automated segmentation of fluorescently labeled nuclei provides viable training data without the need for manual segmentation. A training dataset size of four large stitched images with medium cell density was enough to reach a true positive rate above 88% and a false positive rate below 20%.) <|cite_end|> <|cite_start|> (Reference: X-net with different loss functions for cell image segmentation: Convolutional neural network is valid for object segmentation. In recent years, it has been applied to the fields of medicine and cell biology.
Each class has a different number of pixels in an image. Therefore, the accuracy of semantic segmentation varies drastically between objects with a large number of pixels and objects with a small number of pixels. In this paper, we propose X-Net that integrates two encoders and decoders to solve this problem. This has the advantage of extracting rich features from two encoders and using two decoders to complement the location information and small objects. By using different loss functions for each decoder, we can use the ensemble of two decoders with different viewpoints. We evaluated our method on the Arabidopsis cell images and Drosophila cell images. Experimental results show that our method achieved better accuracy than the conventional methods.) <|cite_end|> <|cite_start|> (Reference: Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for Biomedical and Biological Images: Instance segmentation is an important task for biomedical and biological image analysis. Due to the complicated background components, the high variability of object appearances, numerous overlapping objects, and ambiguous object boundaries, this task still remains challenging. Recently, deep learning based methods have been widely employed to solve these problems and can be categorized into proposal-free and proposal-based methods. However, both proposal-free and proposal-based methods suffer from information loss, as they focus on either global-level semantic or local-level instance features. To tackle this issue, we present a Panoptic Feature Fusion Net (PFFNet) that unifies the semantic and instance features in this work. Specifically, our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features, in order to facilitate the semantic contextual information learning in the instance branch. Then, a mask quality sub-branch is designed to align the confidence score of each object with the quality of the mask prediction. Furthermore, a consistency regularization mechanism is designed between the semantic segmentation tasks in the semantic and instance branches, for the robust learning of both tasks. Extensive experiments demonstrate the effectiveness of our proposed PFFNet, which outperforms several state-of-the-art methods on various biomedical and biological datasets.) <|cite_end|> <|cite_start|> (Reference: Learning to segment cell nuclei in phase-contrast microscopy from fluorescence images for drug discovery: We describe a method for analyzing geometrical properties of cell nuclei from phase contrast microscopy images. This is useful in drug discovery for quantifying the effect of candidate chemical compounds, bypassing the need for fluorescence imaging. Fluorescence images are then only used for training our nuclei segmentation, avoiding the need for the time consuming expert annotations. Geometry based descriptors are calculated and aggregated and fed into a classifier to distinguish the different types of chemical treatments. The drug treatment can be distinguished from no treatment with accuracy better than 95% from fluorescence images and better than 77% from phase contrast images.) <|cite_end|>. 
Nevertheless, these methods still have a non-negligible weakness: they heavily depend on elaborately labeled images for fully-supervised model training\; <|cite_start|> (Reference: CDNet: Centripetal Direction Network for Nuclear Instance Segmentation: Nuclear instance segmentation is a challenging task due to a large number of touching and overlapping nuclei in pathological images. Existing methods cannot effectively recognize the accurate boundary owing to neglecting the relationship between pixels (e.g., direction information). In this paper, we propose a novel Centripetal Direction Network (CDNet) for nuclear instance segmentation. Specifically, we define centripetal direction feature as a class of adjacent directions pointing to the nuclear center to represent the spatial relationship between pixels within the nucleus. These direction features are then used to construct a direction difference map to represent the similarity within instances and the differences between instances. Finally, we propose a direction-guided refinement module, which acts as a plug-and-play module to effectively integrate auxiliary tasks and aggregate the features of different branches. Experiments on MoNuSeg and CPM17 datasets show that CDNet is significantly better than the other methods and achieves the state-of-the-art performance. The code is available at https://github.com/honglianghe/CDNet.) <|cite_end|> <|cite_start|> (Reference: Mutual-complementing framework for nuclei detection and segmentation in pathology image: Detection and segmentation of nuclei are fundamental analysis operations in pathology images, the assessments derived from which serve as the gold standard for cancer diagnosis. Manual segmenting nuclei is expensive and time-consuming. What’s more, accurate segmentation detection of nuclei can be challenging due to the large appearance variation, conjoined and overlapping nuclei, and serious degeneration of histological structures. Supervised methods highly rely on massive annotated samples. The existing two unsupervised methods are prone to failure on degenerated samples. This paper proposes a Mutual-Complementing Framework (MCF) for nuclei detection and segmentation in pathology images. Two branches of MCF are trained in the mutual-complementing manner, where the detection branch complements the pseudo mask of the segmentation branch, while the progressive trained segmentation branch complements the missing nucleus templates through calculating the mask residual between the predicted mask and detected result. In the detection branch, two response map fusion strategies and gradient direction based postprocessing are devised to obtain the optimal detection response. Furthermore, the confidence loss combined with the synthetic samples and self-finetuning is adopted to train the segmentation network with only high confidence areas. Extensive experiments demonstrate that MCF achieves comparable performance with only a few nucleus patches as supervision. Especially, MCF possesses good robustness (only dropping by about 6%) on degenerated samples, which are critical and common cases in clinical diagnosis.)
<|cite_end|>, and their performance degrades drastically under data distribution shifts (also known as domain shifts, e.g., changes in imaging modality, staining technique, and cancer type between training and testing data\; <|cite_start|> (Reference: Robust histopathology image analysis: To label or to synthesize: Detection, segmentation and classification of nuclei are fundamental analysis operations in digital pathology. Existing state-of-the-art approaches demand extensive amount of supervised training data from pathologists and may still perform poorly in images from unseen tissue types. We propose an unsupervised approach for histopathology image segmentation that synthesizes heterogeneous sets of training image patches, of every tissue type. Although our synthetic patches are not always of high quality, we harness the motley crew of generated samples through a generally applicable importance sampling method. This proposed approach, for the first time, re-weighs the training loss over synthetic data so that the ideal (unbiased) generalization loss over the true data distribution is minimized. This enables us to use a random polygon generator to synthesize approximate cellular structures (i.e., nuclear masks) for which no real examples are given in many tissue types, and hence, GAN-based methods are not suited. In addition, we propose a hybrid synthesis pipeline that utilizes textures in real histopathology patches and GAN models, to tackle heterogeneity in tissue textures. Compared with existing state-of-the-art supervised models, our approach generalizes significantly better on cancer types without training data. Even in cancer types with training data, our approach achieves the same performance without supervision cost. We release code and segmentation results on over 5000 Whole Slide Images (WSI) in The Cancer Genome Atlas (TCGA) repository, a dataset that would be orders of magnitude larger than what is available today.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-weighting: Unsupervised domain adaptation (UDA) for nuclei instance segmentation is important for digital pathology, as it alleviates the burden of labor-intensive annotation and domain shift across datasets. In this work, we propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images, by learning from fluorescence microscopy images. More specifically, we first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images. Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation. Thirdly, in order to avoid the influence of the source-biased features, we propose a task re-weighting mechanism to dynamically add trade-off weights for the task-specific loss functions. Experimental results on three datasets indicate that our proposed method outperforms state-of-the-art UDA methods significantly, and demonstrates a similar performance as fully supervised methods.) <|cite_end|>). A promising solution is to introduce unsupervised domain adaptation (UDA), which trains a model on a labeled source domain and an unlabeled target domain\; <|cite_start|> (Reference: A review of domain adaptation without target labels: Domain adaptation has become a prominent problem setting in machine learning and related fields.
This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into, what we refer to as, sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around on mapping, projecting and representing features such that a source classifier performs well on the target domain and inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research.) <|cite_end|>. It has recently gained a lot of traction and been regarded as a potential solution to alleviate the domain shift issue and maintain label-efficiency\; <|cite_start|> (Reference: CDTD: A Large-Scale Cross-Domain Benchmark for Instance-Level Image-to-Image Translation and Domain Adaptive Object Detection: ) <|cite_end|>. Notably, there have also been several attempts to perform domain adaptive nuclei instance segmentation\; <|cite_start|> (Reference: Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-weighting: Unsupervised domain adaptation (UDA) for nuclei instance segmentation is important for digital pathology, as it alleviates the burden of labor-intensive annotation and domain shift across datasets. In this work, we propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images, by learning from fluorescence microscopy images. More specifically, we first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images. Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation. Thirdly, in order to avoid the influence of the source-biased features, we propose a task re-weighting mechanism to dynamically add trade-off weights for the task-specific loss functions. Experimental results on three datasets indicate that our proposed method outperforms state-of-the-art UDA methods significantly, and demonstrates a similar performance as fully supervised methods.) <|cite_end|> <|cite_start|> (Reference: PDAM: A panoptic-level feature alignment framework for unsupervised domain adaptive instance segmentation in microscopy images: In this work, we present an unsupervised domain adaptation (UDA) method, named Panoptic Domain Adaptive Mask R-CNN (PDAM), for unsupervised instance segmentation in microscopy images. Since there currently lack methods particularly for UDA instance segmentation, we first design a Domain Adaptive Mask R-CNN (DAM) as the baseline, with cross-domain feature alignment at the image and instance levels. In addition to the image- and instance-level domain discrepancy, there also exists domain bias at the semantic level in the contextual information. Next, we, therefore, design a semantic segmentation branch with a domain discriminator to bridge the domain gap at the contextual level. By integrating the semantic- and instance-level feature adaptation, our method aligns the cross-domain features at the panoptic level. 
Third, we propose a task re-weighting mechanism to assign trade-off weights for the detection and segmentation loss functions. The task re-weighting mechanism solves the domain bias issue by alleviating the task learning for some iterations when the features contain source-specific factors. Furthermore, we design a feature similarity maximization mechanism to facilitate instance-level feature adaptation from the perspective of representational learning. Different from the typical feature alignment methods, our feature similarity maximization mechanism separates the domain-invariant and domain-specific features by enlarging their feature distribution dependency. Experimental results on three UDA instance segmentation scenarios with five datasets demonstrate the effectiveness of our proposed PDAM method, which outperforms state-of-the-art UDA methods by a large margin.) <|cite_end|> <|cite_start|> (Reference: DARCNN: Domain Adaptive Region-based Convolutional Neural Network for Unsupervised Instance Segmentation in Biomedical Images: In the biomedical domain, there is an abundance of dense, complex data where objects of interest may be challenging to detect or constrained by limits of human knowledge. Labelled domain specific datasets for supervised tasks are often expensive to obtain, and furthermore discovery of novel distinct objects may be desirable for unbiased scientific discovery. Therefore, we propose leveraging the wealth of annotations in benchmark computer vision datasets to conduct unsupervised instance segmentation for diverse biomedical datasets. The key obstacle is thus overcoming the large domain shift from common to biomedical images. We propose a Domain Adaptive Region-based Convolutional Neural Network (DARCNN), that adapts knowledge of object definition from COCO, a large labelled vision dataset, to multiple biomedical datasets. We introduce a domain separation module, a self-supervised representation consistency loss, and an augmented pseudo-labelling stage within DARCNN to effectively perform domain adaptation across such large domain shifts. We showcase DARCNN's performance for unsupervised instance segmentation on numerous biomedical datasets.) <|cite_end|>. They performed unsupervised nuclei segmentation in histopathology images by exploiting domain-invariant knowledge from another modality (e.g., fluorescence microscopy). \begin{figure}[!t] \centerline{\includegraphics[width=0.7\columnwidth]{inter-cancer.png}} \caption{Examples of histopathology images and cropped regions of different cancer types from the Kumar dataset\; <|cite_start|> (Reference: {A dataset and a technique for generalized nuclear segmentation for computational pathology: Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. 
Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.) <|cite_end|>. From left to right: liver cancer, kidney cancer, and colon cancer.} \label{fig:inter-cancer} \end{figure} However, the existing approaches consider the target histopathology image domain as homogeneous. They propose to align the target domain integrally with the source domain, whereas the intra-domain heterogeneity of histopathology images is neglected. Due to inconsistent cancer types, histopathology image patches and cropped regions could exhibit diverse patterns and styles at both the global image level and the local instance level, as depicted in Fig.\;\ref{fig:inter-cancer}. In this case, conventional UDA methods, which are designed for a uniform target data distribution, tend to derive a biased alignment in which only target data whose distribution is similar to the source data can be successfully aligned\; <|cite_start|> (Reference: Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation: Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently, as it could be beneficial for various label-scarce real-world scenarios (e.g., robot control, autonomous driving, medical imaging, etc.). Despite the significant progress in this field, current works mainly focus on a single-source single-target setting, which cannot handle more practical settings of multiple targets or even unseen targets. In this paper, we investigate open compound domain adaptation (OCDA), which deals with mixed and novel situations at the same time, for semantic segmentation. We present a novel framework based on three main design principles: discover, hallucinate, and adapt. The scheme first clusters compound target data based on style, discovering multiple latent domains (discover). Then, it hallucinates multiple latent target domains in source by using image-translation (hallucinate). This step ensures the latent domains in the source and the target to be paired. Finally, target-to-source alignment is learned separately between domains (adapt). In high-level, our solution replaces a hard OCDA problem with much easier multiple UDA problems. We evaluate our solution on standard benchmark GTA to C-driving, and achieved new state-of-the-art results.) <|cite_end|>. Moreover, as these methods only regularize the model according to limited training data, they normally suffer from inferior generalization capability, especially in realistic clinical scenarios where testing images could come from divergent cancer types that do not exist in the training set.
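One simple way to make this heterogeneity argument quantitative (a toy illustration of ours, not the paper's method) is to measure the distribution gap between source features and each target subpartition separately, for example with a kernel maximum mean discrepancy: subpartitions that sit far from the source are exactly those a single global alignment tends to neglect. The helper name rbf_mmd2 and the Gaussian stand-in features below are our own assumptions; in practice the inputs would be deep features of real image patches.
\begin{verbatim}
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: a standard
    proxy for the gap between two empirical feature distributions."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Stand-ins for 2-D embeddings of patches from the source domain and
# from two target "cancer types" at different distances from it.
src      = rng.normal(0.0, 1.0, size=(200, 2))
tgt_near = rng.normal(0.3, 1.0, size=(200, 2))   # subdomain close to source
tgt_far  = rng.normal(2.0, 1.2, size=(200, 2))   # subdomain far from source
print(rbf_mmd2(src, tgt_near))   # small gap
print(rbf_mmd2(src, tgt_far))    # much larger gap
\end{verbatim}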
To transcend the bottlenecks in these conventional single-source-single-target UDA approaches, it is necessary to explicitly model the heterogeneity within the histopathology image domain. \begin{table}[!t] \caption{Comparison between OCDA and other DA settings.} \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{c|c|c|c} \toprule DA Setting & Complexity of Target Domain & Availability of Subdomain Label & Existence of Unseen Testing Subdomains\\ \midrule UDA & Uni-modal & --- & --- \\ Multi-target DA & Multi-modal & \Checkmark & \XSolidBrush\\ OCDA & Multi-modal & \XSolidBrush & \Checkmark\\ \bottomrule \end{tabular} \end{adjustbox} \label{tab:setting_compare} \end{table} A trivial solution is to partition the whole target domain into several subdomains, following the settings of multi-target DA\; <|cite_start|> (Reference: Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation: In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in presence of multiple target domains: The objective is to train a single model that can handle all these domains at test time. Such a multi-target adaptation is crucial for a variety of scenarios that real-world autonomous systems must handle. It is a challenging setup since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism.The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.) <|cite_end|> <|cite_start|> (Reference: Cross-Boosted Multi-Target Domain Adaptation for Multi-Modality Histopathology Image Translation and Segmentation: Recent digital pathology workflows mainly focus on mono-modality histopathology image analysis. However, they ignore the complementarity between Haematoxylin & Eosin (H&E) and Immunohistochemically (IHC) stained images, which can provide comprehensive gold standard for cancer diagnosis. To resolve this issue, we propose a cross-boosted multi-target domain adaptation pipeline for multi-modality histopathology images, which contains Cross-frequency Style-auxiliary Translation Network (CSTN) and Dual Cross-boosted Segmentation Network (DCSN). Firstly, CSTN achieves the one-to-many translation from fluorescence microscopy images to H&E and IHC images for providing source domain training data. To generate images with realistic color and texture, Cross-frequency Feature Transfer Module (CFTM) is developed to pertinently restructure and normalize high-frequency content and low-frequency style features from different domains. Then, DCSN fulfills multi-target domain adaptive segmentation, where a dual-branch encoder is introduced, and Bidirectional Cross-domain Boosting Module (BCBM) is designed to implement cross-modality information complementation through bidirectional inter-domain collaboration. Finally, we establish Multi-modality Thymus Histopathology (MThH) dataset, which is the largest publicly available H&E and IHC image benchmark. 
Experiments on MThH dataset and several public datasets show that the proposed pipeline outperforms state-of-the-art methods on both histopathology image translation and segmentation.) <|cite_end|>. However, such an approach has notable limitations: it requires domain labels indicating the subdomain of each target sample, and it cannot flexibly adapt to the complexity of the target domain (i.e., the number of subdomains). \begin{figure}[!t] \centerline{\includegraphics[width=0.9\columnwidth]{compoundDA.png}} \caption{Illustration of the OCDA setting in a benchmark performing domain adaptation from fluorescence microscopy to histopathology images. Note that, unlike multi-target UDA\; <|cite_start|> (Reference: Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation: In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in presence of multiple target domains: The objective is to train a single model that can handle all these domains at test time. Such a multi-target adaptation is crucial for a variety of scenarios that real-world autonomous systems must handle. It is a challenging setup since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.) <|cite_end|>, the cancer type of each image patch is unavailable during training.} \label{fig:compoundDA} \end{figure} In this paper, we propose a novel framework from the perspective of open compound domain adaptation (OCDA)\; <|cite_start|> (Reference: Open Compound Domain Adaptation: A typical domain adaptation approach is to adapt models trained on the annotated data in a source domain (e.g., sunny weather) for achieving high performance on the test data in a target domain (e.g., rainy weather). Whether the target contains a single homogeneous domain or multiple heterogeneous domains, existing works always assume that there exist clear distinctions between the domains, which is often not true in practice (e.g., changes in weather). We study an open compound domain adaptation (OCDA) problem, in which the target is a compound of multiple homogeneous domains without domain labels, reflecting realistic data collection from mixed and novel situations. We propose a new approach based on two technical insights into OCDA: 1) a curriculum domain adaptation strategy to bootstrap generalization across domains in a data-driven self-organizing fashion and 2) a memory module to increase the model's agility towards novel domains. Our experiments on digit classification, facial expression recognition, semantic segmentation, and reinforcement learning demonstrate the effectiveness of our approach.) <|cite_end|> to address the intra-domain heterogeneity in the target histopathology dataset.
The task in this setting is to transfer knowledge from a labeled source domain to an unlabeled compound target domain, which contains multiple related yet divergent subdomains without domain labels. In addition, the adapted model for OCDA is concurrently expected to possess better generalization capability, so that its performance is maintained when dealing with data from unseen subdomains at test time, as showcased in Fig.\;\ref{fig:compoundDA}. An extensive comparison between OCDA and other UDA scenarios is given in Table\;\ref{tab:setting_compare}. OCDA is a more realistic yet relatively unexplored setting, with only a few works making an early attempt to provide a solution\; <|cite_start|> (Reference: Open Compound Domain Adaptation: A typical domain adaptation approach is to adapt models trained on the annotated data in a source domain (e.g., sunny weather) for achieving high performance on the test data in a target domain (e.g., rainy weather). Whether the target contains a single homogeneous domain or multiple heterogeneous domains, existing works always assume that there exist clear distinctions between the domains, which is often not true in practice (e.g., changes in weather). We study an open compound domain adaptation (OCDA) problem, in which the target is a compound of multiple homogeneous domains without domain labels, reflecting realistic data collection from mixed and novel situations. We propose a new approach based on two technical insights into OCDA: 1) a curriculum domain adaptation strategy to bootstrap generalization across domains in a data-driven self-organizing fashion and 2) a memory module to increase the model's agility towards novel domains. Our experiments on digit classification, facial expression recognition, semantic segmentation, and reinforcement learning demonstrate the effectiveness of our approach.) <|cite_end|> <|cite_start|> (Reference: Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation: Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently, as it could be beneficial for various label-scarce real-world scenarios (e.g., robot control, autonomous driving, medical imaging, etc.). Despite the significant progress in this field, current works mainly focus on a single-source single-target setting, which cannot handle more practical settings of multiple targets or even unseen targets. In this paper, we investigate open compound domain adaptation (OCDA), which deals with mixed and novel situations at the same time, for semantic segmentation. We present a novel framework based on three main design principles: discover, hallucinate, and adapt. The scheme first clusters compound target data based on style, discovering multiple latent domains (discover). Then, it hallucinates multiple latent target domains in source by using image-translation (hallucinate). This step ensures the latent domains in the source and the target to be paired. Finally, target-to-source alignment is learned separately between domains (adapt). In high-level, our solution replaces a hard OCDA problem with much easier multiple UDA problems. We evaluate our solution on standard benchmark GTA to C-driving, and achieved new state-of-the-art results.)
<|cite_end|> <|cite_start|> (Reference: Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation: Open compound domain adaptation (OCDA) is a domain adaptation setting, where target domain is modeled as a compound of multiple unknown homogeneous domains, which brings the advantage of improved generalization to unseen domains. In this work, we propose a principled meta-learning based approach to OCDA for semantic segmentation, MOCDA, by modeling the unlabeled target domain continuously. Our approach consists of four key steps. First, we cluster target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner. Then, different sub-target domains are split into independent branches, for which batch normalization parameters are learnt to treat them independently. A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code. Meanwhile, we learn to online update the model by model-agnostic meta-learning (MAML) algorithm, thus to further improve generalization. We validate the benefits of our approach by extensive experiments on synthetic-to-real knowledge transfer benchmark datasets, where we achieve the state-of-the-art performance in both compound and open domains.) <|cite_end|>. Nevertheless, they focus on downstream tasks like image classification and semantic segmentation, where image-level semantic features are dominant. Notably, there is no OCDA framework for instance segmentation, where local-level instance features are equally crucial and indispensable. As for technical shortcomings, the current works mostly split the compound target domain according to the style features of each sample extracted by a pre-trained model and assign fixed domain labels at the beginning of the training stage. Since style feature extraction is performed via models pre-trained on other tasks, there inevitably exists noise in the encoded style representations, which makes the partition of the compound target domain inaccurate and, in consequence, deteriorates the model training in every subsequent step. Another shortcoming of the existing methods is that they assume that the unseen testing subdomain can be constructed as a combination of all seen training subdomains, which does not hold for the histopathology image domain, given its complexity and the countless attributes contributing to subdomain variations. In addition, we observe that morphology-level supervision is lacking in the image synthesis framework deployed by those methods. As a consequence, the transformed images would lose essential nucleus shape details and incur mismatches between images and their segmentation annotations. To this end, we propose a novel two-stage disentanglement framework to tackle nuclei instance segmentation in the OCDA setting. It captures the domain-agnostic semantics\;(content) and the domain-specific modality/stain/cancer factors\;(style) separately at both the global image level and the local instance level, so that the two levels complement each other. In the first image-level disentanglement stage, we present a cross-domain image translation network to transform source images to target-like ones. In the second stage, we conduct feature disentanglement at the local level to further alleviate cross-domain discrepancy in instance-level representations.
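The one-shot subdomain discovery criticized above is easy to make concrete. The following is a minimal sketch, assuming hypothetical helper names and a generic pre-trained backbone: style statistics (channel-wise means and standard deviations of feature maps, an AdaIN-like proxy for image style) are extracted once for each unlabeled target sample and clustered with k-means, and the resulting labels are frozen, so any noise in the style features propagates to all subsequent training steps; this is exactly the rigidity that a progressive clustering strategy would avoid.

```python
import numpy as np
from sklearn.cluster import KMeans

def style_statistics(feature_map: np.ndarray) -> np.ndarray:
    """Channel-wise mean/std of a (C, H, W) feature map, a common proxy for image style."""
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)
    return np.concatenate([flat.mean(axis=1), flat.std(axis=1)])

def discover_subdomains(feature_maps, k: int) -> np.ndarray:
    """One-shot partition of an unlabeled compound target domain into k latent subdomains.

    The labels are assigned once and never revised, so noise in the pre-trained
    style features carries over to every later training step.
    """
    styles = np.stack([style_statistics(f) for f in feature_maps])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(styles)

# Toy usage: 8 target patches represented by random (C, H, W) backbone features.
rng = np.random.default_rng(0)
maps = [rng.normal(loc=i % 2, size=(16, 8, 8)) for i in range(8)]
print(discover_subdomains(maps, k=2))
```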
Considering the aforementioned shortcomings of existing methods, we specifically propose four technical insights. In Stage I, we first integrate the learning of style encoding with the image translation task and propose a progressive clustering and separation strategy to facilitate style feature extraction during synthesis task learning. Then, we draw inspiration from recent advances in domain generalization and introduce the style randomization technique\; <|cite_start|> (Reference: Style Augmentation: Data Augmentation via Style Randomization: We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of convolutional neural networks (CNN) over both classification and regression based tasks. During training, our style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, by sampling input style embeddings from a multivariate normal distribution instead of inferring them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating consistent improvements in generalization.) <|cite_end|> for data augmentation (a minimal sketch of this augmentation follows the contribution list below). It strengthens the model's robustness and generalizability so that its performance is maintained on unseen testing subdomains. Furthermore, we impose a dual-branch morphological regularization on top of the image translation network to minimize nucleus deformation and image-annotation mismatch during translation. In Stage II, we devise a global-local style consistency mechanism to stabilize the instance-level domain-invariant feature generation. Our key contributions can be summarized as follows: \begin{itemize} \item We propose a holistic two-stage disentanglement framework for cross-domain nuclei instance segmentation in the OCDA setting to explicitly address the heterogeneity of histopathology images. To the best of our knowledge, it is the first work to explicitly model the heterogeneity of histopathology images in UDA and design an OCDA framework for instance segmentation. \item To overcome the limitations of the existing OCDA methods, in the global image-level alignment, a progressive clustering and separation strategy is incorporated to benefit the style feature disentanglement. To enhance the model's generalization capability for unseen testing subdomains, we introduce style randomization to generate fake histopathology images in arbitrary styles for data augmentation. \item In the local instance-level alignment, we leverage the global-local style consistency to facilitate feature disentanglement and domain-invariant representation learning. \item We further develop a novel regularization module based on semantic masks and object boundaries to preserve shape and structural details of nuclei in image translation.
\item We comprehensively evaluate our approach and demonstrate its effectiveness on both cross-modality and cross-stain UDA nuclei instance segmentation. It significantly outperforms the state-of-the-art conventional UDA and OCDA methods for unsupervised domain adaptive nuclei instance segmentation in histopathology images. \end{itemize} <|paper_end|>
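To make the style randomization mentioned above concrete, here is a minimal sketch, assuming an AdaIN-style feature renormalization with illustrative (hypothetical) function and parameter names: following the cited style-augmentation idea, a style embedding is sampled from a normal distribution instead of being inferred from a style image, and the sampled channel-wise statistics are imposed on the content features, yielding features in arbitrary styles for augmentation while preserving shape and semantics.

```python
import numpy as np

def adain_randomize(content: np.ndarray, rng: np.random.Generator,
                    mu_scale: float = 1.0, sigma_scale: float = 0.3) -> np.ndarray:
    """Replace the channel-wise statistics of a (C, H, W) feature map with
    randomly sampled ones: content (shape/semantics) is preserved while
    style (stain/modality appearance) is randomized."""
    c, h, w = content.shape
    flat = content.reshape(c, -1)
    mean = flat.mean(axis=1, keepdims=True)
    std = flat.std(axis=1, keepdims=True) + 1e-6
    normalized = (flat - mean) / std                   # strip the original style
    new_mean = rng.normal(0.0, mu_scale, size=(c, 1))  # sample a random style
    new_std = np.abs(rng.normal(1.0, sigma_scale, size=(c, 1)))
    return (normalized * new_std + new_mean).reshape(c, h, w)

rng = np.random.default_rng(42)
features = rng.normal(size=(16, 8, 8))
augmented = adain_randomize(features, rng)  # same content, new random style
```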
[ "<|reference_start|> Prognostic value of automatically extracted nuclear morphometric features in whole slide images of male breast cancer: <|reference_end|>", "<|reference_start|> X-net with different loss functions for cell image segmentation: Convolutional neural network is valid for object segmentation. In recent years, it has been applied to the fields of medicine and cell biology. Each class has a different number of pixels in an image. Therefore, the accuracy of semantic segmentation varies drastically between objects with a large number of pixels and objects with a small number of pixels. In this paper, we propose X-Net that integrates two encoders and decoders to solve this problem. This has the advantage of extracting rich features from two encoders and using two decoders to complement the location information and small objects. By using different loss functions for each decoder, we can use the ensemble of two decoders with different viewpoints. We evaluated our method on the Arabidopsis cell images and Drosophila cell images. Experimental results show that our method achieved better accuracy than the conventional methods. <|reference_end|>", "<|reference_start|> Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for Biomedical and Biological Images: Instance segmentation is an important task for biomedical and biological image analysis. Due to the complicated background components, the high variability of object appearances, numerous overlapping objects, and ambiguous object boundaries, this task still remains challenging. Recently, deep learning based methods have been widely employed to solve these problems and can be categorized into proposal-free and proposal-based methods. However, both proposal-free and proposal-based methods suffer from information loss, as they focus on either global-level semantic or local-level instance features. To tackle this issue, we present a Panoptic Feature Fusion Net (PFFNet) that unifies the semantic and instance features in this work. Specifically, our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features, in order to facilitate the semantic contextual information learning in the instance branch. Then, a mask quality sub-branch is designed to align the confidence score of each object with the quality of the mask prediction. Furthermore, a consistency regularization mechanism is designed between the semantic segmentation tasks in the semantic and instance branches, for the robust learning of both tasks. Extensive experiments demonstrate the effectiveness of our proposed PFFNet, which outperforms several state-of-the-art methods on various biomedical and biological datasets. <|reference_end|>", "<|reference_start|> A review of domain adaptation without target labels: Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into, what we refer to as, sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. 
Feature-based methods revolve around on mapping, projecting and representing features such that a source classifier performs well on the target domain and inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research. <|reference_end|>" ]
[ 2, 6, 7, 13 ]
{"<|cite_2|>": "ss-750770", "<|multi_cite_3_1|>": "ss-750771", "<|multi_cite_3_2|>": "ss-2331299", "<|multi_cite_3_3|>": "ss-750772", "<|multi_cite_4_1|>": "arxiv-155896", "<|multi_cite_4_2|>": "ss-750773", "<|multi_cite_4_3|>": "ss-750774", "<|multi_cite_4_4|>": "arxiv-248424", "<|multi_cite_4_5|>": "ss-750775", "<|multi_cite_5_1|>": "ss-750776", "<|multi_cite_5_2|>": "ss-1368299", "<|multi_cite_6_1|>": "ss-750777", "<|multi_cite_6_2|>": "arxiv-263536", "<|cite_7|>": "arxiv-187745", "<|cite_8|>": "ss-750778", "<|multi_cite_9_1|>": "arxiv-263536", "<|multi_cite_9_2|>": "ss-750779", "<|multi_cite_9_3|>": "arxiv-331971", "<|cite_10|>": "ss-2206158", "<|cite_1|>": "arxiv-372577", "<|multi_cite_11_1|>": "arxiv-361125", "<|multi_cite_11_2|>": "ss-750780", "<|cite_12|>": "arxiv-361125", "<|cite_13|>": "arxiv-222449", "<|multi_cite_14_1|>": "arxiv-222449", "<|multi_cite_14_2|>": "arxiv-372577", "<|multi_cite_14_3|>": "arxiv-310187", "<|cite_15|>": "arxiv-172689"}
2011.00652-1
<|cite_start|> (Reference: 3DSSD: Point-based 3D Single Stage Object Detector: Currently, there have been many kinds of voxel-based 3D single stage detectors, while point-based single stage methods are still underexplored. In this paper, we first present a lightweight and effective point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency. In this paradigm, all upsampling layers and refinement stage, which are indispensable in all existing point-based methods, are abandoned to reduce the large computation cost. We novelly propose a fusion sampling strategy in downsampling process to make detection on less representative points feasible. A delicate box prediction network including a candidate generation layer, an anchor-free regression head with a 3D center-ness assignment strategy is designed to meet with our demand of accuracy and speed. Our paradigm is an elegant single stage anchor-free framework, showing great superiority to other existing methods. We evaluate 3DSSD on widely used KITTI dataset and more challenging nuScenes dataset. Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well, with inference speed more than 25 FPS, 2x faster than former state-of-the-art point-based methods.) <|cite_end|>. Compared to the voxel-based methods, the point-based methods have flexible receptive fields for point cloud feature learning via the set abstraction operation; however, they are limited by high computation costs. The methods based on a mixture of representations take both point and voxel inputs and fuse their features at different stages of the networks for 3D object detection, such as PV-RCNN <|cite_start|> (Reference: PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection: We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantages of efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computations and also to encode representative scene features. Given the high-quality 3D proposals generated by the voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction with multiple receptive fields. Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins by using only point clouds. Code is available at https://github.com/open-mmlab/OpenPCDet.) <|cite_end|> and SA-SSD <|cite_start|> (Reference: Structure aware single-stage 3D object detection from point cloud: 3D object detection from point cloud data plays an essential role in autonomous driving.
Current single-stage detectors are efficient by progressively downscaling the 3D point clouds in a fully convolutional manner. However, the downscaled features inevitably lose spatial information and cannot make full use of the structure information of 3D point cloud, degrading their localization precision. In this work, we propose to improve the localization precision of single-stage detectors by explicitly leveraging the structure information of 3D point cloud. Specifically, we design an auxiliary network which converts the convolutional features in the backbone network back to point-level representations. The auxiliary network is jointly optimized, by two point-level supervisions, to guide the convolutional features in the backbone network to be aware of the object structure. The auxiliary network can be detached after training and therefore introduces no extra computation in the inference stage. Besides, considering that single-stage detectors suffer from the discordance between the predicted bounding boxes and corresponding classification confidences, we develop an efficient part-sensitive warping operation to align the confidences to the predicted bounding boxes. Our proposed detector ranks at the top of KITTI 3D/BEV detection leaderboards and runs at 25 FPS for inference.) <|cite_end|>. These methods can take advantage of both the voxel-based operations (i.e., 3D sparse convolution) and PointNet-based operations (i.e., set abstraction operation) to enable high computational efficiency and flexible receptive fields for improving the 3D detection performance. \subsection{Multi-modal Fusion based 3D Object Detection} To take advantage of camera and LiDAR sensors, various fusion methods have also been proposed. According to the stage at which fusion occurs in the detection pipeline, they can be grouped into two main categories: result-level fusion and feature-level fusion. The result-level fusion methods leverage image object detectors to generate 2D region proposals that narrow down the regions of interest for the 3D object detectors <|cite_start|> (Reference: Frustum PointNets for 3D Object Detection from RGB-D Data: In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.) <|cite_end|> <|cite_start|> (Reference: Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection: In this work, we propose a novel method termed \emph{Frustum ConvNet (F-ConvNet)} for amodal 3D object detection from point clouds.
Given 2D region proposals in an RGB image, our method first generates a sequence of frustums for each region proposal, and uses the obtained frustums to group local points. F-ConvNet aggregates point-wise features as frustum-level feature vectors, and arrays these feature vectors as a feature map for use of its subsequent component of fully convolutional network (FCN), which spatially fuses frustum-level features and supports an end-to-end and continuous estimation of oriented boxes in the 3D space. We also propose component variants of F-ConvNet, including an FCN variant that extracts multi-resolution frustum features, and a refined use of F-ConvNet over a reduced 3D space. Careful ablation studies verify the efficacy of these component variants. F-ConvNet assumes no prior knowledge of the working 3D environment and is thus dataset-agnostic. We present experiments on both the indoor SUN-RGBD and outdoor KITTI datasets. F-ConvNet outperforms all existing methods on SUN-RGBD, and at the time of submission it outperforms all published works on the KITTI benchmark. Code has been made available at: {\url{https://github.com/zhixinwang/frustum-convnet}.}) <|cite_end|> <|cite_start|> (Reference: PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation: We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information. Unlike existing methods that either use multi-stage pipelines or hold sensor and dataset-specific assumptions, PointFusion is conceptually simple and application-agnostic. The image data and the raw point cloud data are independently processed by a CNN and a PointNet architecture, respectively. The resulting outputs are then combined by a novel fusion network, which predicts multiple 3D box hypotheses and their confidences, using the input 3D points as spatial anchors. We evaluate PointFusion on two distinctive datasets: the KITTI dataset that features driving scenes captured with a lidar-camera setup, and the SUN-RGBD dataset that captures indoor environments with RGB-D cameras. Our model is the first one that is able to perform better or on-par with the state-of-the-art on these diverse datasets without any dataset-specific model tuning.) <|cite_end|>. However, the performance of these methods is limited by the accuracy of the camera-based detectors. The feature-level fusion methods jointly reason over multi-sensor inputs, whose intermediate features are deeply fused <|cite_start|> (Reference: Multi-View 3D Object Detection Network for Autonomous Driving: This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection.
In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.) <|cite_end|> <|cite_start|> (Reference: Joint 3D Proposal Generation and Object Detection from View Aggregation: We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: https://github.com/kujason/avod) <|cite_end|> <|cite_start|> (Reference: Multi-Task Multi-Sensor Fusion for 3D Object Detection: In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and BEV object detection, while being real time.) <|cite_end|> <|cite_start|> (Reference: Deep Continuous Fusion for Multi-Sensor 3D Object Detection: In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.) <|cite_end|> <|cite_start|> (Reference: Cross-Modality 3D Object Detection: In this paper, we focus on exploring the fusion of images and point clouds for 3D object detection in view of the complementary nature of the two modalities, i.e., images possess more semantic information while point clouds specialize in distance sensing. To this end, we present a novel two-stage multi-modal fusion network for 3D object detection, taking both binocular images and raw point clouds as input. The whole architecture facilitates two-stage fusion. The first stage aims at producing 3D proposals through sparse point-wise feature fusion. Within the first stage, we further exploit a joint anchor mechanism that enables the network to utilize 2D-3D classification and regression simultaneously for better proposal generation. 
The second stage works on the 2D and 3D proposal regions and fuses their dense features. In addition, we propose to use pseudo LiDAR points from stereo matching as a data augmentation method to densify the LiDAR points, as we observe that objects missed by the detection network mostly have too few points especially for far-away objects. Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.) <|cite_end|> <|cite_start|> (Reference: MVX-Net: Multimodal VoxelNet for 3D Object Detection: Many recent works on 3D object detection have focused on designing neural network architectures that can consume point cloud data. While these approaches demonstrate encouraging performance, they are typically based on a single modality and are unable to leverage information from other modalities, such as a camera. Although a few approaches fuse data from different modalities, these methods either use a complicated pipeline to process the modalities sequentially, or perform late-fusion and are unable to learn interaction between different modalities at early stages. In this work, we present PointFusion and VoxelFusion: two simple yet effective early-fusion approaches to combine the RGB and point cloud modalities, by leveraging the recently introduced VoxelNet architecture. Evaluation on the KITTI dataset demonstrates significant improvements in performance over approaches which only use point cloud data. Furthermore, the proposed method provides results competitive with the state-of-the-art multimodal algorithms, achieving top-2 ranking in five of the six bird's eye view and 3D detection categories on the KITTI benchmark, by using a simple single stage network.) <|cite_end|> <|cite_start|> (Reference: PointPainting: Sequential Fusion for 3D Object Detection: Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information offering an opportunity for tight sensor-fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the art methods, Point-RCNN, VoxelNet and PointPillars on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effects of Painting depends on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.) <|cite_end|>. MV3D <|cite_start|> (Reference: Multi-View 3D Object Detection Network for Autonomous Driving: This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. 
The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.) <|cite_end|> is a pioneering work of this type, which takes CV, RV, and BEV as input and exploits a 3D RPN to generate 3D proposals. AVOD <|cite_start|> (Reference: Joint 3D Proposal Generation and Object Detection from View Aggregation: We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: https://github.com/kujason/avod) <|cite_end|> fuses the LiDAR BEV and CV features at an intermediate convolutional layer to propose 3D bounding boxes. ContFuse <|cite_start|> (Reference: Deep Continuous Fusion for Multi-Sensor 3D Object Detection: In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.) <|cite_end|> uses continuous convolution to fuse image and LiDAR features at different resolutions. MMF <|cite_start|> (Reference: Multi-Task Multi-Sensor Fusion for 3D Object Detection: In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and BEV object detection, while being real time.)
<|cite_end|> adds ground estimation and depth completion to the fusion framework and learns better fusion feature representations while jointly learning multiple tasks. While various sensor fusion networks have been proposed, they do not easily outperform LiDAR-only detectors because they seldom account for the differing importance and noise levels of the multi-view features. In the following sections, we present our proposed MVAF-Net to overcome this challenge. <|paper_end|>
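As an illustration of the sequential (decoration-based) fusion strategy described in the PointPainting abstract quoted above, the following is a minimal sketch, with a placeholder projection matrix and score map, of appending image-segmentation class scores to LiDAR points before any LiDAR-only detector consumes the cloud; the helper name and toy calibration are assumptions, not the paper's implementation.

```python
import numpy as np

def paint_points(points: np.ndarray, seg_scores: np.ndarray,
                 lidar_to_image: np.ndarray) -> np.ndarray:
    """Append per-pixel semantic class scores to LiDAR points.

    points:         (N, 3) xyz in the LiDAR frame.
    seg_scores:     (H, W, K) softmax output of an image segmentation net.
    lidar_to_image: (3, 4) projection matrix (intrinsics @ extrinsics).
    Returns (N, 3 + K) painted points; points projecting outside the image
    keep zero scores.
    """
    n = points.shape[0]
    h, w, k = seg_scores.shape
    homo = np.hstack([points, np.ones((n, 1))])   # (N, 4) homogeneous coords
    proj = homo @ lidar_to_image.T                # (N, 3) image-plane coords
    z = np.clip(proj[:, 2:3], 1e-6, None)
    uv = np.round(proj[:, :2] / z).astype(int)    # pixel coordinates
    scores = np.zeros((n, k))
    valid = (proj[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    scores[valid] = seg_scores[uv[valid, 1], uv[valid, 0]]
    return np.hstack([points, scores])

# Toy usage: random cloud in front of a toy pinhole camera, 2-class score map.
rng = np.random.default_rng(0)
painted = paint_points(rng.normal(size=(100, 3)) + [0, 0, 5],
                       rng.random((64, 64, 2)),
                       np.hstack([np.eye(3) * 32, np.array([[32], [32], [1]])]))
print(painted.shape)  # (100, 5)
```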
[ "<|reference_start|> Frustum PointNets for 3D Object Detection from RGB-D Data: In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability. <|reference_end|>", "<|reference_start|> Multi-View 3D Object Detection Network for Autonomous Driving: This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods. <|reference_end|>", "<|reference_start|> Joint 3D Proposal Generation and Object Detection from View Aggregation: We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. 
Code is at: https://github.com/kujason/avod <|reference_end|>", "<|reference_start|> Deep Continuous Fusion for Multi-Sensor 3D Object Detection: In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art. <|reference_end|>" ]
[ 3, 6, 7, 9 ]
{"<|cite_1|>": "ss-772764", "<|cite_2|>": "ss-1094448", "<|cite_3|>": "arxiv-104670", "<|cite_4|>": "arxiv-311696", "<|cite_5|>": "arxiv-191828", "<|cite_6|>": "arxiv-140386", "<|cite_7|>": "arxiv-191828", "<|cite_8|>": "arxiv-140386", "<|cite_9|>": "arxiv-184061", "<|cite_10|>": "arxiv-215887", "<|cite_11|>": "ss-1264356", "<|cite_12|>": "arxiv-184486", "<|cite_13|>": "arxiv-213449", "<|cite_14|>": "ss-746384", "<|cite_15|>": "arxiv-140995", "<|cite_16|>": "arxiv-194067", "<|cite_17|>": "arxiv-141651", "<|cite_18|>": "arxiv-110865", "<|cite_19|>": "arxiv-142455", "<|cite_20|>": "arxiv-311697", "<|cite_21|>": "arxiv-311186", "<|cite_22|>": "arxiv-286186", "<|cite_23|>": "arxiv-197948", "<|cite_24|>": "arxiv-235834", "<|cite_25|>": "arxiv-228865", "<|cite_26|>": "arxiv-110865", "<|cite_27|>": "arxiv-142455", "<|cite_28|>": "arxiv-311697", "<|cite_29|>": "arxiv-197948", "<|cite_30|>": "arxiv-235834", "<|cite_31|>": "arxiv-228865", "<|cite_32|>": "arxiv-234168", "<|cite_33|>": "arxiv-261689", "<|cite_34|>": "ss-959121", "<|cite_35|>": "arxiv-126253", "<|cite_36|>": "ss-1264356", "<|cite_37|>": "ss-763774", "<|cite_38|>": "arxiv-184061", "<|cite_39|>": "arxiv-250022", "<|cite_40|>": "arxiv-191828", "<|cite_41|>": "arxiv-140386", "<|cite_42|>": "ss-1264356", "<|cite_43|>": "arxiv-184486", "<|cite_44|>": "arxiv-213449", "<|cite_45|>": "ss-746384", "<|cite_46|>": "arxiv-241468", "<|cite_47|>": "ss-763774", "<|cite_48|>": "arxiv-140386", "<|cite_49|>": "ss-1264356", "<|cite_50|>": "arxiv-184486", "<|cite_51|>": "ss-746384", "<|cite_52|>": "arxiv-126253", "<|cite_53|>": "arxiv-184061", "<|cite_54|>": "arxiv-215887", "<|cite_55|>": "arxiv-250022", "<|cite_56|>": "arxiv-241468", "<|cite_57|>": "ss-763774", "<|cite_58|>": "arxiv-140995", "<|cite_59|>": "arxiv-194067", "<|cite_60|>": "arxiv-141651", "<|cite_61|>": "arxiv-110865", "<|cite_62|>": "arxiv-142455", "<|cite_63|>": "arxiv-311697", "<|cite_64|>": "arxiv-311186", "<|cite_65|>": "arxiv-286186", "<|cite_66|>": "arxiv-197948", "<|cite_67|>": "arxiv-235834", "<|cite_68|>": "arxiv-110865", "<|cite_69|>": "arxiv-142455", "<|cite_70|>": "arxiv-311186", "<|cite_71|>": "arxiv-311697"}
2107.02591
<|paper_start|> Title: Decision problems for origin-close top-down tree transducers (full version) Abstract: Decision problems for origin-close top-down tree transducers (full version): Tree transductions are binary relations over finite trees. For tree transductions defined by non-deterministic top-down tree transducers, inclusion, equivalence and synthesis problems are known to be undecidable. Adding origin semantics to tree transductions, i.e., tagging each output node with the input node it originates from, is a known way to recover decidability for inclusion and equivalence. The origin semantics is rather rigid; in this work, we introduce a similarity measure for transducers with origin semantics and show that we can decide inclusion, equivalence and synthesis problems for origin-close non-deterministic top-down tree transducers. Introduction \label{sec:intro} In this paper we study decision problems for top-down tree transducers over finite trees with origin semantics. Rounds <|cite_start|> (Reference: Mappings and grammars on trees: ) <|cite_end|> and Thatcher independently invented tree transducers (their model is known today as the top-down tree transducer) as a generalization of finite-state word transducers in the context of natural language processing and compilers at the beginning of the 1970s. Nowadays, there is a rich landscape of various tree transducer models used in many fields, for example, syntax-directed translation <|cite_start|> (Reference: Syntax-directed semantics: Formal models based on tree transducers: ) <|cite_end|>, databases <|cite_start|> (Reference: Typechecking for XML transformers: We study the typechecking problem for XML transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.) <|cite_end|> <|cite_start|> (Reference: XQuery Streaming by Forest Transducers: Streaming of XML transformations is a challenging task and only very few systems support streaming. Research approaches generally define custom fragments of XQuery and XPath that are amenable to streaming, and then design custom algorithms for each fragment.
These languages have several shortcomings. Here we take a more principled approach to the problem of streaming XQuery-based transformations. We start with an elegant transducer model for which many static analysis problems are well-understood: the Macro Forest Transducer (MFT). We show that a large fragment of XQuery can be translated into MFTs --- indeed, a fragment of XQuery, that can express important features that are missing from other XQuery stream engines, such as GCX: our fragment of XQuery supports XPath predicates and let-statements. We then rely on a streaming execution engine for MFTs, one which uses a well-founded set of optimizations from functional programming, such as strictness analysis and deforestation. Our prototype achieves time and memory efficiency comparable to the fastest known engine for XQuery streaming, GCX. This is surprising because our engine relies on the OCaml built-in garbage collector and does not use any specialized buffer management, while GCX's efficiency is due to clever and explicit buffer management.) <|cite_end|>, linguistics <|cite_start|> (Reference: The Power of Extended Top-Down Tree Transducers: Extended top-down tree transducers (transducteurs généralisés descendants; see [A. Arnold and M. Dauchet, Bi-transductions de forêts, in Proceedings of the 3rd International Colloquium on Automata, Languages and Programming, Edinburgh University Press, Edinburgh, 1976, pp. 74-86]) received renewed interest in the field of natural language processing. Here those transducers are extensively and systematically studied. Their main properties are identified and their relation to classical top-down tree transducers is exactly characterized. The obtained properties completely explain the Hasse diagram of the induced classes of tree transformations. In addition, it is shown that most interesting classes of transformations computed by extended top-down tree transducers are not closed under composition.) <|cite_end|> <|cite_start|> (Reference: Shallow local multi-bottom-up tree transducers in statistical machine translation: We present a new translation model integrating the shallow local multi bottom-up tree transducer. We perform a large-scale empirical evaluation of our obtained system, which demonstrates that we significantly beat a realistic tree-to-tree baseline on the WMT 2009 English→German translation task. As an additional contribution we make the developed software and complete tool-chain publicly available for further experimentation.) <|cite_end|>, programming languages <|cite_start|> (Reference: Composition of functions with accumulating parameters: Many functional programs with accumulating parameters are contained in the class of macro tree transducers. We present a program transformation technique that can be used to solve the efficiency problems due to creation and consumption of intermediate data structures in compositions of such functions, where classical deforestation techniques fail. To do so, given two macro tree transducers under appropriate restrictions, we construct a single macro tree transducer that implements the composition of the two original ones. The imposed restrictions are more liberal than those in the literature on macro tree transducer composition, thus generalising previous results.)
<|cite_end|> <|cite_start|> (Reference: Polynomial-time inverse computation for accumulative functions with multiple data traversals: The problem of inverse computation has many potential applications such as serialization/deserialization, providing support for undo, and test-case generation for software testing. In this paper, we propose an inverse computation method that always terminates for a class of functions known as parameter-linear macro tree transducers, which involve multiple data traversals and the use of accumulations. The key to our method is the observation that a function in the class can be regarded as a non-accumulative context-generating transformation without multiple data traversals. Accordingly, we demonstrate that it is easy to achieve terminating inverse computation for the class by context-wise memoization of the inverse computation results. We also show that when we use a tree automaton to express the inverse computation results, the inverse computation runs in time polynomial to the size of the original output and the textual program size.) <|cite_end|>, and security analysis <|cite_start|> (Reference: Transducer-based analysis of cryptographic protocols: ) <|cite_end|>. Unlike tree automata, tree transducers have undecidable inclusion and equivalence problems <|cite_start|> (Reference: Decidability results concerning tree transducers I: A tree transducer is called functional if its induced transformation is a partial mapping. We show that the functionality of tree transducers is decidable. Consequently, the equivalence problem for deterministic tree transducers is also decidable. The latter result was independently achieved by Z. Zachar in [12] for bottom-up tree transducers and a restricted class of top-down tree transducers. The solvability of the equivalence problem for generalized deterministic sequential machines is known from [2] and [4]. It was proved in [11] that this positive result can not be generalized for arbitrary, i.e., "generalized nondeterministic", sequential machines. Therefore, the equivalence problem for nondeterministic tree transducers is undecidable. Our result can be used to minimize deterministic tree transducers in an effective manner. However, the minimal realizations of a deterministic tree transducer are not isomorphic. We investigate conditions assuring the uniqueness (up to isomorphism) of minimal realizations in certain classes of tree transducers. Part of the results of the present paper have been announced in [8]. The terminology is used in the sense of [5].) <|cite_end|>. This is already the case for word transducers <|cite_start|> (Reference: The unsolvability of the Equivalence Problem for Λ-Free nondeterministic generalized machines: It is shown that the equivalence problem for Λ-free nondeterministic generalized machines is unsolvable, and it is observed that this result implies the unsolvability of the equality problem for c-finite languages.) <|cite_end|> <|cite_start|> (Reference: Multitape One-Way Nonwriting Automata: ) <|cite_end|>. The intractability of, e.g., the equivalence problem for transducers (whether two given transducers recognize the same transduction, that is, the same relation) mainly stems from the fact that two transducers recognizing the same transduction may produce their outputs very differently. One transducer may produce its output early and thus be ahead of the other. In general, there are infinitely many transducers for a single transduction.
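For illustration, here is a minimal sketch (a toy example of our own, not code from the cited works) of two programs realizing the same transduction --- the identity on words --- where one is always one letter ahead of the other; the names \texttt{eager} and \texttt{lagging} are invented for this sketch.
\begin{verbatim}
def eager(word):
    # Emits each letter as soon as it is read (delay 0).
    out = []
    for c in word:
        out.append(c)
    return "".join(out)

def lagging(word):
    # Same transduction, but always one letter behind (delay 1):
    # a letter is emitted only when the next one is read, and the
    # last buffered letter is flushed at the end of the input.
    out, buf = [], None
    for c in word:
        if buf is not None:
            out.append(buf)
        buf = c
    if buf is not None:
        out.append(buf)
    return "".join(out)

assert eager("abc") == lagging("abc") == "abc"
\end{verbatim}
Both programs define the same relation, yet no step-by-step alignment of their outputs exists, which gives one intuition for why naive product constructions fail for such comparisons.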
To overcome this difficulty, Bojanczyk <|cite_start|> (Reference: Transducers with origin information: Call a string-to-string transducer regular if it can be realised by one of the following equivalent models: mso transductions, two-way deterministic automata with output, and streaming transducers with registers. This paper proposes to treat origin information as part of the semantics of a regular string-to-string transducer. With such semantics, the model admits a machine-independent characterisation, Angluin-style learning in polynomial time, as well as effective characterisations of natural subclasses such as one-way transducers or first-order definable transducers.) <|cite_end|> introduced origin semantics: in addition to the output, there is an origin function that maps output positions to their originating input positions. The main result of <|cite_start|> (Reference: Transducers with origin information: Call a string-to-string transducer regular if it can be realised by one of the following equivalent models: mso transductions, two-way deterministic automata with output, and streaming transducers with registers. This paper proposes to treat origin information as part of the semantics of a regular string-to-string transducer. With such semantics, the model admits a machine-independent characterisation, Angluin-style learning in polynomial time, as well as effective characterisations of natural subclasses such as one-way transducers or first-order definable transducers.) <|cite_end|> is a machine-independent characterization of transductions defined by deterministic two-way transducers with origin semantics. Word transducers with origin semantics were further investigated in <|cite_start|> (Reference: Which Classes of Origin Graphs Are Generated by Transducers: We study various models of transducers equipped with origin information. We consider the semantics of these models as particular graphs, called origin graphs, and we characterise the families of such graphs recognised by streaming string transducers.) <|cite_end|>, and properties of subclasses of transductions with origin semantics definable by one-way word transducers have been studied in <|cite_start|> (Reference: Synchronizing Relations on Words: ) <|cite_end|> <|cite_start|> (Reference: Closure Properties of Synchronized Relations: A standard approach to define k-ary word relations over a finite alphabet A is through k-tape finite state automata that recognize regular languages L over {1, ..., k} × A, where (i, a) is interpreted as reading letter a from tape i. Accordingly, a word w ∈ L denotes the tuple (u1,...,uk) ∈ (A*)^k in which ui is the projection of w onto i-labelled letters. While this formalism defines the well-studied class of rational relations, enforcing restrictions on the reading regime from the tapes, which we call synchronization, yields various sub-classes of relations. Such synchronization restrictions are imposed through regular properties on the projection of the language L onto {1,..., k}. In this way, for each regular language C ⊆ {1,..., k}*, one obtains a class Rel(C) of relations. Synchronous, Recognizable, and Length-preserving rational relations are all examples of classes that can be defined in this way. We study basic properties of these classes of relations, in terms of closure under intersection, complement, concatenation, Kleene star and projection. We characterize the classes with each closure property. For the binary case (k = 2) this yields effective procedures.) <|cite_end|>.
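As a reading aid, the origin semantics can be made concrete with a minimal sketch (ours, not from the cited papers): every output symbol is paired with the 0-based input position it originates from, and origin-equivalence compares these tagged outputs rather than the plain output words.
\begin{verbatim}
def double_with_origins(word):
    # A transducer that duplicates every letter, under origin
    # semantics: both copies of an input letter carry the same
    # origin, namely the position of the letter that produced them.
    return [(c, i) for i, c in enumerate(word) for _ in range(2)]

print(double_with_origins("ab"))
# [('a', 0), ('a', 0), ('b', 1), ('b', 1)]
\end{verbatim}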
Under origin semantics, many interesting problems become decidable, e.g., equivalence of one-way word transducers. This is not surprising as a transduction now incorporates \emph{how} it translates an input word into an output word, providing much more information. In <|cite_start|> (Reference: Decision Problems of Tree Transducers with Origin: A tree transducer with origin translates an input tree into a pair of output tree and origin info. The origin info maps each node in the output tree to the unique input node that created it. In this way, the implementation of the transducer becomes part of its semantics. We show that the landscape of decidable properties changes drastically when origin info is added. For instance, equivalence of nondeterministic top-down and MSO transducers with origin is decidable. Both problems are undecidable without origin. The equivalence of deterministic top-down tree-to-string transducers is decidable with origin, while without origin it is a long standing open problem. With origin, we can decide if a deterministic macro tree transducer can be realized by a deterministic top-down tree transducer; without origin this is an open problem.) <|cite_end|>, the authors have initiated a study of several decision problems for different tree transducer models on finite trees with origin semantics. More concretely, they studied inclusion, equivalence, injectivity and query determinacy problems for top-down tree transducers, tree transducers definable in monadic second order logic, and top-down tree-to-word transducers. They showed (amongst other results) that inclusion and equivalence become decidable for all models except tree-to-string transducers with origin semantics. In general, there has been an interest in incorporating some kind of origin information (i.e., \emph{how} a transduction works) into tree transductions in order to gain more insight into different tree transductions, see, e.g., <|cite_start|> (Reference: Origin Tracking + Text Differencing = Textual Model Differencing: ) <|cite_end|> <|cite_start|> (Reference: Macro tree translations of linear size increase are MSO definable: The first main result is that if a macro tree translation is of linear size increase, i.e., if the size of every output tree is linearly bounded by the size of the corresponding input tree, then the translation is MSO definable (i.e., definable in monadic second-order logic). This gives a new characterization of the MSO definable tree translations in terms of macro tree transducers: they are exactly the macro tree translations of linear size increase. The second main result is that given a macro tree transducer, it can be decided whether or not its translation is MSO definable, and if it is, then an equivalent MSO transducer can be constructed. Similar results hold for attribute grammars, which define a subclass of the macro tree translations.) <|cite_end|> <|cite_start|> (Reference: Tree Transformations and Dependencies: ) <|cite_end|>. However, the origin semantics is rather rigid. To mitigate this, in <|cite_start|> (Reference: On Equivalence and Uniformisation Problems for Finite Transducers: Transductions are binary relations of finite words. For rational transductions, i.e., transductions defined by finite transducers, the inclusion, equivalence and sequential uniformisation problems are known to be undecidable.
In this paper, we investigate stronger variants of inclusion, equivalence and sequential uniformisation, based on a general notion of transducer resynchronisation, and show their decidability. We also investigate the classes of finite-valued rational transductions and deterministic rational transductions, which are known to have a decidable equivalence problem. We show that sequential uniformisation is also decidable for them.) <|cite_end|>, the authors have introduced a similarity measure between (one-way) word transducers with origin semantics that compares the difference between the outputs produced on the same input prefix; in short, the measure compares their output delays. They show that inclusion, equivalence, and sequential uniformization (see next paragraph) problems become decidable for transducers that have bounded output delay. These problems are undecidable for word transducers in general, see <|cite_start|> (Reference: The unsolvability of the Equivalence Problem for Λ-Free nondeterministic generalized machines: It is shown that the equivalence problem for Λ-free nondeterministic generalized machines is unsolvable, and it is observed that this result implies the unsolvability of the equality problem for c-finite languages.) <|cite_end|> <|cite_start|> (Reference: Multitape One-Way Nonwriting Automata: ) <|cite_end|> <|cite_start|> (Reference: Uniformization in Automata Theory: We survey some classical results on uniformizations of automaton definable relations by automaton definable functions. We consider the case of automatic relations over finite and infinite words and trees as well as rational relations over finite and infinite words. We also provide some new results concerning the uniformization of automatic and rational relations over finite words by subsequential transducers. We show that it is undecidable whether a given rational relation can be uniformized by a subsequential transducer and provide a decision procedure for the case of automatic relations.) <|cite_end|>. The introduction of this similarity measure has triggered similar work on two-way word transducers, see <|cite_start|> (Reference: Origin-equivalence of two-way word transducers is in PSPACE: We consider equivalence and containment problems for word transductions. These problems are known to be undecidable when the transductions are relations between words realized by non-deterministic transducers, and become decidable when restricting to functions from words to words. Here we prove that decidability can be equally recovered by adopting a slightly different, but natural semantics, called origin semantics and introduced by Bojanczyk in 2014. Specifically, we prove that the equivalence and containment problems for two-way word transducers in the origin semantics are PSPACE-complete. We also consider a variant of the containment problem where two-way transducers are compared under the origin semantics, but in a more relaxed way, by allowing distortions of the origins. The possible distortions are described by means of a resynchronization relation. We propose a logical formalism for describing a broad class of resynchronizations, while preserving the decidability of the variant of the containment problem.) <|cite_end|> <|cite_start|> (Reference: On Synthesis of Resynchronizers for Transducers: We study two formalisms that allow to compare transducers over words under origin semantics: rational and regular resynchronizers, and show that the former are captured by the latter.
We then consider some instances of the following synthesis problem: given transducers T1, T2, construct a rational (resp. regular) resynchronizer R, if it exists, such that T1 is contained in R(T2) under the origin semantics. We show that synthesis of rational resynchronizers is decidable for functional, and even finite-valued, one-way transducers, and undecidable for relational one-way transducers. In the two-way setting, synthesis of regular resynchronizers is shown to be decidable for unambiguous two-way transducers. For larger classes of two-way transducers, the decidability status is open.) <|cite_end|>. In order to obtain decidability results (in a less rigid setting than origin semantics), we initiate the study of inclusion, equivalence, and uniformization problems for top-down tree transducers under similarity measures which are based on the behavior of the transducers. A uniformization of a binary relation is a function that selects for each element of the domain of the relation an element in its image. Synthesis problems are closely related to \emph{effective} uniformization problems; algorithmic synthesis of specifications (i.e., relations) asks for effective uniformization by functions that can be implemented in a specific way. The classical setting is Church's synthesis problem <|cite_start|> (Reference: Logic, Arithmetic and Automata: This paper is a summary of recent work in the application of mathematical logic to finite automata, and especially of mathematical logic beyond propositional calculus. To begin with a sketch of the history of the matter, let us recall that application of the "algebra of logic", i.e., elementary Boolean algebra, to the analysis of switching circuits was first suggested by Ehrenfest [A]. Nothing came of Ehrenfest's remark for many years, and it seems to have remained wholly unknown outside of Russia. Yanovskaya [G] says that details of the suggested application were worked out by Shestakoff in 1934-35. However, Shestakoff's candidate's dissertation, embodying the material, was presented to the University of Moscow in 1938, and the earliest publications by Shestakoff are [D] and [E] in 1941. Meanwhile the same idea had occurred independently to Nakasima and Hanzawa [B] and Shannon [C]. For some time the development of the idea proceeded independently in Russia, in Japan, and in the United States, the three lines of development having had at first no influence on one another. This use of Boolean algebra is now widely familiar, and therefore requires no elaboration here. It is usually taken to be a Boolean algebra of cardinal number 2 that is used, although the character of the application would more naturally suggest propositional calculus. Use of the Boolean algebra and of propositional calculus are equivalent in a way that is well known. The choice of Boolean algebra is advantageous if algebraic methods and results are to be employed. But otherwise there is a certain artificiality in allowing only equations and inequalities to be asserted. And for further application of mathematical logic, the choice of propositional calculus provides a better basis. Mathematical logic beyond propositional calculus is first applied to automata theory in the paper of McCulloch and Pitts [16], in which the context is biological. The authors are concerned with analyzing the behavior of a net of neurons and with the question of the existence of, and of finding, a neural net having some specified behavior.
But their hypotheses about the behavior and the interaction of neurons are such that these questions become entirely similar to corresponding questions about electronic digital computing circuits. The relevance of the ideas of McCulloch and Pitts to the theory of digital computing circuits was noticed by John von Neumann, and it was evidently this that led him to suggest application of mathematical …) <|cite_end|>, where logical specifications over infinite words are considered. Büchi and Landweber <|cite_start|> (Reference: Solving Sequential Conditions by Finite-State Strategies: Our main purpose is to present an algorithm which decides whether or not a condition 𝕮(X, Y) stated in sequential calculus admits a finite automata solution, and produces one if it exists. This solves a problem stated in [4] and contains, as a very special case, the answer to Case 4 left open in [6]. In an equally appealing form the result can be restated in the terminology of [7], [10], [15]: Every ω-game definable in sequential calculus is determined. Moreover the player who has a winning strategy, in fact, has a winning finite-state strategy, that is one which can effectively be played in a strong sense. The main proof, that of the central Theorem 1, will be presented at the end. We begin with a discussion of its consequences.) <|cite_end|> showed that for specifications in monadic second order logic, that is, specifications that can be translated into synchronous finite automata, it is decidable whether they can be realized by a synchronous sequential transducer. Later, decidability was extended to asynchronous sequential transducers <|cite_start|> (Reference: Finite Delay Solutions for Sequential Conditions: ) <|cite_end|> <|cite_start|> (Reference: Foundations of Software Science and Computational Structures: ) <|cite_end|>. Detailed studies of the synthesis of sequential transducers from synchronous and asynchronous finite automata on finite words are provided in <|cite_start|> (Reference: On Equivalence and Uniformisation Problems for Finite Transducers: Transductions are binary relations of finite words. For rational transductions, i.e., transductions defined by finite transducers, the inclusion, equivalence and sequential uniformisation problems are known to be undecidable. In this paper, we investigate stronger variants of inclusion, equivalence and sequential uniformisation, based on a general notion of transducer resynchronisation, and show their decidability. We also investigate the classes of finite-valued rational transductions and deterministic rational transductions, which are known to have a decidable equivalence problem. We show that sequential uniformisation is also decidable for them.) <|cite_end|> <|cite_start|> (Reference: Uniformization Problems for Synchronizations of Automatic Relations on Words: A uniformization of a binary relation is a function that is contained in the relation and has the same domain as the relation. The synthesis problem asks for effective uniformization for classes of relations and functions that can be implemented in a specific way. We consider the synthesis problem for automatic relations over finite words (also called regular or synchronized rational relations) by functions implemented by specific classes of sequential transducers. It is known that the problem "Given an automatic relation, does it have a uniformization by a subsequential transducer?"
is decidable in the two variants where the uniformization can either be implemented by an arbitrary subsequential transducer or it has to be implemented by a synchronous transducer. We introduce a new variant of this problem in which the allowed input/output behavior of the subsequential transducer is specified by a set of synchronizations and prove decidability for a specific class of synchronizations.) <|cite_end|>, for an overview see <|cite_start|> (Reference: Uniformization in Automata Theory: We survey some classical results on uniformizations of automaton definable relations by automaton definable functions. We consider the case of automatic relations over finite and infinite words and trees as well as rational relations over finite and infinite words. We also provide some new results concerning the uniformization of automatic and rational relations over finite words by subsequential transducers. We show that it is undecidable whether a given rational relation can be uniformized by a subsequential transducer and provide a decision procedure for the case of automatic relations.) <|cite_end|>. Uniformization questions in this spirit were first studied for relations over finite trees in <|cite_start|> (Reference: Synthesis of Deterministic Top-down Tree Transducers from Automatic Tree Relations: We consider the synthesis of deterministic tree transducers from automaton definable specifications, given as binary relations, over finite trees. We consider the case of specifications that are deterministic top-down tree automatic, meaning the specification is recognizable by a deterministic top-down tree automaton that reads the two given trees synchronously in parallel. In this setting we study tree transducers that are allowed to have either bounded delay or arbitrary delay. Delay is caused whenever the transducer reads a symbol from the input tree but does not produce output. We provide decision procedures for both bounded and arbitrary delay that yield deterministic top-down tree transducers which realize the specification for valid input trees. Similar to the case of relations over words, we use two-player games to obtain our results.) <|cite_end|> <|cite_start|> (Reference: Uniformization Problems for Tree-Automatic Relations and Top-Down Tree Transducers: For a given binary relation of finite trees, we consider the synthesis problem of deciding whether there is a deterministic top-down tree transducer that uniformizes the relation, and constructing such a transducer if it exists. A uniformization of a relation is a function that is contained in the relation and has the same domain as the relation. It is known that this problem is decidable if the relation is a deterministic top-down tree-automatic relation. We show that it becomes undecidable for general tree-automatic relations (specified by non-deterministic top-down tree automata). We also exhibit two cases for which the problem remains decidable. If we restrict the transducers to be path-preserving, which is a subclass of linear transducers, then the synthesis problem is decidable for general tree-automatic relations. If we consider relations that are finite unions of deterministic top-down tree-automatic relations, then the problem is decidable for synchronous transducers, which produce exactly one output symbol in each step (but can be non-linear).) <|cite_end|>. The authors have considered tree-automatic relations, that is, relations definable by tree automata over a product alphabet.
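Before continuing, it may help to restate the uniformization notion in symbols (a standard formulation, identifying a function with its graph): a function $f$ uniformizes a relation $R \subseteq X \times Y$ if
\[
f \subseteq R \quad\text{and}\quad \mathrm{dom}(f) = \mathrm{dom}(R),
\]
that is, $f$ selects for every $x \in \mathrm{dom}(R)$ a single witness $f(x)$ with $(x, f(x)) \in R$.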
These works have shown that, for tree-automatic relations definable by deterministic top-down tree automata, uniformization by deterministic top-down tree transducers (which are a natural extension of sequential transducers on words) is decidable. However, for non-deterministic top-down tree automata it becomes undecidable. Our contribution is the introduction of two similarity measures for top-down tree transducers. The first measure is an extension of the output delay measure introduced for word transducers in <|cite_start|> (Reference: On Equivalence and Uniformisation Problems for Finite Transducers: Transductions are binary relations of finite words. For rational transductions, i.e., transductions defined by finite transducers, the inclusion, equivalence and sequential uniformisation problems are known to be undecidable. In this paper, we investigate stronger variants of inclusion, equivalence and sequential uniformisation, based on a general notion of transducer resynchronisation, and show their decidability. We also investigate the classes of finite-valued rational transductions and deterministic rational transductions, which are known to have a decidable equivalence problem. We show that sequential uniformisation is also decidable for them.) <|cite_end|> to tree transducers. Comparing top-down tree transducers based on their output delay has also been done, e.g., in <|cite_start|> (Reference: Look-ahead removal for total deterministic top-down tree transducers: ) <|cite_end|>; we use the same notion of delay to define our measure. Unfortunately, while decidability for major decision problems is regained in the setting of word transducers, we show that this is not the case in the setting of tree transducers. The second similarity measure is more closely connected to the origin semantics. We define two transducers as origin-close if there is a bound on the distance between any two input positions that the two transducers designate as origins of the same output node. Our main result is that inclusion, equivalence and uniformization by deterministic top-down tree transducers are decidable for origin-close top-down tree transducers. The paper is structured as follows. In \cref{sec:prelims} we provide definitions and terminology used throughout the paper. In \cref{sec:similarity} we present two similarity measures for (top-down tree) transducers and provide a comparison of their expressiveness, and in \cref{sec:origin-close} we consider decision problems for origin-close top-down tree transducers. <|paper_end|>
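To illustrate the origin-closeness notion defined in the paper above, the following minimal sketch (ours, not from the paper) encodes tree nodes as root-to-node paths of child indices, measures the distance between two nodes as the usual path distance through their least common ancestor, and compares two origin mappings node by node; it assumes, for simplicity, that both transducers produce the same output tree.
\begin{verbatim}
def node_distance(u, v):
    # u, v: tree nodes given as tuples of child indices (the path
    # from the root). Distance = steps up to the least common
    # ancestor plus steps down from it.
    lcp = 0
    for a, b in zip(u, v):
        if a != b:
            break
        lcp += 1
    return (len(u) - lcp) + (len(v) - lcp)

def origins_within(origin1, origin2, bound):
    # origin1, origin2: dicts mapping each output node to its origin
    # (an input-tree node) under the two transducers' origin
    # semantics; True iff all origin pairs stay within the bound.
    return all(node_distance(origin1[n], origin2[n]) <= bound
               for n in origin1)
\end{verbatim}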
[ "<|reference_start|> XQuery Streaming by Forest Transducers: Streaming of XML transformations is a challenging task and only very few systems support streaming. Research approaches generally define custom fragments of XQuery and XPath that are amenable to streaming, and then design custom algorithms for each fragment. These languages have several shortcomings. Here we take a more principles approach to the problem of streaming XQuery-based transformations. We start with an elegant transducer model for which many static analysis problems are well-understood: the Macro Forest Transducer (MFT). We show that a large fragment of XQuery can be translated into MFTs --- indeed, a fragment of XQuery, that can express important features that are missing from other XQuery stream engines, such as GCX: our fragment of XQuery supports XPath predicates and let-statements. We then rely on a streaming execution engine for MFTs, one which uses a well-founded set of optimizations from functional programming, such as strictness analysis and deforestation. Our prototype achieves time and memory efficiency comparable to the fastest known engine for XQuery streaming, GCX. This is surprising because our engine relies on the OCaml built in garbage collector and does not use any specialized buffer management, while GCX's efficiency is due to clever and explicit buffer management. <|reference_end|>", "<|reference_start|> The unsolvability of the Equivalence Problem for Λ-Free nondeterministic generalized machines: It is shown that the equivalence problem for A-free nondeterministic generalized machines is unsolvable, and it is observed that this result implies the unsolvability of the equality problem for c-finite languages. <|reference_end|>", "<|reference_start|> On Synthesis of Resynchronizers for Transducers: We study two formalisms that allow to compare transducers over words under origin semantics: rational and regular resynchronizers, and show that the former are captured by the latter. We then consider some instances of the following synthesis problem: given transducers T1, T2, construct a rational (resp. regular) resynchronizer R, if it exists, such that T1 is contained in R(T2) under the origin semantics. We show that synthesis of rational resynchronizers is decidable for functional, and even finite-valued, one-way transducers, and undecidable for relational one-way transducers. In the two-way setting, synthesis of regular resynchronizers is shown to be decidable for unambiguous two-way transducers. For larger classes of two-way transducers, the decidability status is open. <|reference_end|>", "<|reference_start|> Foundations of Software Science and Computational Structures: <|reference_end|>" ]
[ 3, 10, 26, 30 ]
{"<|cite_1|>": "ss-2037988", "<|cite_3|>": "ss-910838", "<|multi_cite_4_1|>": "ss-910839", "<|multi_cite_4_2|>": "arxiv-53180", "<|multi_cite_5_1|>": "ss-2037999", "<|multi_cite_5_2|>": "ss-910840", "<|multi_cite_6_1|>": "ss-910841", "<|multi_cite_6_2|>": "ss-910842", "<|cite_7|>": "ss-960706", "<|cite_8|>": "ss-910843", "<|multi_cite_9_1|>": "ss-910844", "<|multi_cite_9_2|>": "ss-2484320", "<|cite_10|>": "arxiv-50645", "<|cite_11|>": "arxiv-50645", "<|cite_12|>": "ss-1374630", "<|multi_cite_13_1|>": "ss-1814071", "<|multi_cite_13_2|>": "ss-2148587", "<|cite_14|>": "ss-1374632", "<|multi_cite_15_1|>": "ss-910845", "<|multi_cite_15_2|>": "ss-1983205", "<|multi_cite_15_3|>": "ss-910846", "<|cite_16|>": "arxiv-93033", "<|multi_cite_17_1|>": "ss-910844", "<|multi_cite_17_2|>": "ss-2484320", "<|multi_cite_17_3|>": "ss-1260973", "<|multi_cite_18_1|>": "arxiv-166659", "<|multi_cite_18_2|>": "arxiv-210712", "<|cite_19|>": "ss-1998821", "<|cite_20|>": "ss-823840", "<|multi_cite_21_1|>": "ss-1260974", "<|multi_cite_21_2|>": "ss-2148589", "<|multi_cite_22_1|>": "arxiv-93033", "<|multi_cite_22_2|>": "arxiv-157568", "<|cite_23|>": "ss-1260973", "<|multi_cite_24_1|>": "arxiv-65225", "<|multi_cite_24_2|>": "ss-910847", "<|cite_25|>": "arxiv-93033", "<|cite_26|>": "ss-910848"}
1904.08189
<|paper_start|> Title: CenterNet: Keypoint Triplets for Object Detection Abstract: CenterNet: Keypoint Triplets for Object Detection: In object detection, keypoint-based approaches often suffer from a large number of incorrect object bounding boxes, arguably due to the lack of an additional look into the cropped regions. This paper presents an efficient solution which explores the visual patterns within each cropped region with minimal costs. We build our framework upon a representative one-stage keypoint-based detector named CornerNet. Our approach, named CenterNet, detects each object as a triplet, rather than a pair, of keypoints, which improves both precision and recall. Accordingly, we design two customized modules named cascade corner pooling and center pooling, which play the roles of enriching information collected by both top-left and bottom-right corners and providing more recognizable information at the central regions, respectively. On the MS-COCO dataset, CenterNet achieves an AP of 47.0%, which outperforms all existing one-stage detectors by at least 4.9%. Meanwhile, with a faster inference speed, CenterNet demonstrates quite comparable performance to the top-ranked two-stage detectors. Code is available at https://github.com/Duankaiwen/CenterNet. Introduction Object detection has been significantly advanced with the help of deep learning, especially convolutional neural networks <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|> (CNNs). In the current era, one of the most popular pipelines is anchor-based <|cite_start|> (Reference: Fast R-CNN: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate.
Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.) <|cite_end|> <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|> <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: ) <|cite_end|> <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. 
We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|>, which places a set of rectangles with pre-defined sizes and regresses them to the desired locations with the help of ground-truth objects. These approaches often need a large number of anchors to ensure a sufficiently high IoU (intersection over union) rate with the ground-truth objects, and the size and aspect ratio of each anchor box need to be manually designed. In addition, anchors are usually not aligned with the ground-truth boxes, which is not conducive to the bounding box classification task. \begin{figure}[tb] \centering \subfigure{ \includegraphics[height=0.165\textwidth,width=0.165\textheight]{1_a.pdf} \hspace{0.05in} \includegraphics[height=0.165\textwidth,width=0.165\textheight]{1_b.pdf} \label{fig1a} } \subfigure{ \includegraphics[height=0.155\textwidth,width=0.345\textheight]{1_cc.pdf} \label{fig1b} } \vspace{-2ex} \caption{In the first row, we visualize the top 100 bounding boxes (according to the MS-COCO dataset standard) of CornerNet. Ground-truth and predicted objects are marked in blue and red, respectively. In the second row, we show that correct predictions can be determined by checking the central parts.} \label{fig1} \end{figure} To overcome the drawbacks of anchor-based approaches, a keypoint-based object detection pipeline named CornerNet <|cite_start|> (Reference: CornerNet: Detecting Objects as Paired Keypoints: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors.) <|cite_end|> was proposed. It represented each object by a pair of corner keypoints, which bypassed the need for anchor boxes and achieved the state-of-the-art one-stage object detection accuracy. Nevertheless, the performance of CornerNet is still restricted by its relatively weak ability to refer to the global information of an object. That is to say, since each object is constructed by a pair of corners, the algorithm is sensitive to the boundaries of objects while not being aware of which pairs of keypoints should be grouped into objects. Consequently, as shown in Figure~\ref{fig1}, it often generates some incorrect bounding boxes, most of which could be easily filtered out with complementary information, {\em e.g.}, the aspect ratio.
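The figure's idea of checking the central parts can be sketched as follows (a toy illustration of ours; the paper defines the central region precisely later on --- here it is simply the middle \texttt{frac} of the box in each dimension, and \texttt{survives\_center\_check} is an invented name):
\begin{verbatim}
def survives_center_check(box, centers, frac=1.0 / 3):
    # box: (x1, y1, x2, y2, cls) predicted from a pair of corners.
    # centers: detected center keypoints as (x, y, cls) triples.
    # Keep the box only if a center keypoint of the same class
    # falls into its central region.
    x1, y1, x2, y2, cls = box
    w, h = x2 - x1, y2 - y1
    cx1 = x1 + w * (1 - frac) / 2
    cy1 = y1 + h * (1 - frac) / 2
    cx2, cy2 = cx1 + w * frac, cy1 + h * frac
    return any(c == cls and cx1 <= x <= cx2 and cy1 <= y <= cy2
               for (x, y, c) in centers)
\end{verbatim}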
To address this issue, we equip CornerNet with the ability to perceive the visual patterns within each proposed region, so that it can identify the correctness of each bounding box by itself. In this paper, we present a low-cost yet effective solution named {\bf CenterNet}, which explores the central part of a proposal, {\em i.e.}, the region that is close to the geometric center, with one extra keypoint. Our intuition is that, if a predicted bounding box has a high IoU with the ground-truth box, then the probability that the center keypoint in its central region is predicted as the same class is high, and vice versa. Thus, during inference, after a proposal is generated as a pair of corner keypoints, we determine if the proposal is indeed an object by checking if there is a center keypoint of the same class falling within its central region. The idea, as shown in Figure~\ref{fig1}, is to use a triplet, instead of a pair, of keypoints to represent each object. Accordingly, to better detect center keypoints and corners, we propose two strategies to enrich center and corner information, respectively. The first strategy is named {\bf center pooling}, which is used in the branch for predicting center keypoints. Center pooling helps the center keypoints obtain more recognizable visual patterns within objects, which makes it easier to perceive the central part of a proposal. We achieve this by taking the maximum summed response in both the horizontal and vertical directions of the center keypoint on a feature map for predicting center keypoints. The second strategy is named {\bf cascade corner pooling}, which equips the original corner pooling module <|cite_start|> (Reference: CornerNet: Detecting Objects as Paired Keypoints: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors.) <|cite_end|> with the ability to perceive internal information. We achieve this by taking the maximum summed response in both the boundary and internal directions of objects on a feature map for predicting corners. Empirically, we verify that such a two-directional pooling method is more stable, {\em i.e.}, being more robust to feature-level noise, which contributes to the improvement of both precision and recall. We evaluate the proposed CenterNet on the MS-COCO dataset <|cite_start|> (Reference: Microsoft COCO: Common Objects in Context: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old.
With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.) <|cite_end|>, one of the most popular benchmarks for large-scale object detection. CenterNet, with both center pooling and cascade corner pooling incorporated, reports an AP of $\mathbf{47.0\%}$ on the test-dev set, which outperforms all existing one-stage detectors by a large margin. With an average inference time of $270\mathrm{ms}$ using a 52-layer hourglass backbone <|cite_start|> (Reference: Stacked Hourglass Networks for Human Pose Estimation: This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a "stacked hourglass" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.) <|cite_end|> and $340\mathrm{ms}$ using a 104-layer hourglass backbone <|cite_start|> (Reference: Stacked Hourglass Networks for Human Pose Estimation: This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a "stacked hourglass" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.) <|cite_end|> per image, CenterNet is quite efficient yet closely matches the state-of-the-art performance of two-stage detectors. The remainder of this paper is organized as follows. Section~\ref{RelatedWork} briefly reviews related work, and Section~\ref{Approach} details the proposed CenterNet. Experimental results are given in Section~\ref{Experiments}, followed by the conclusion in Section~\ref{Conclusions}. Related Work \label{RelatedWork} Object detection involves locating and classifying objects. In the deep learning era, powered by deep convolutional neural networks, object detection approaches can be roughly categorized into two main types of pipelines, namely, two-stage approaches and one-stage approaches. \vspace{1ex}\noindent \textbf{Two-stage approaches}~divide the object detection task into two stages: extract RoIs, then classify and regress the RoIs. R-CNN <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years.
The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|> uses a selective search method <|cite_start|> (Reference: {Selective Search for Object Recognition: This paper evaluates the selective search algorithm implemented by J.R.R. Uijlings et al. The selective search algorithm addresses the problem of object recognition. In particular the selective search has emphasis on the inherit hierarchical structure of images. This is done by combining segmentation for object recognition with exhaustive search. The advantage of exhaustive search is that is aims to capture all object locations, and the advantage of segmentation is that it uses image structure to guide the search for object locations. The selective search results in a small set of data-driven, class-independent, high quality locations. The results of selective search have been outstanding with exceptional scores across the Pascal Image challenges. This paper evaluates external potential challenges where the algorithm may fail to recognize an object. These instances may include camouflaged object, which may be obvious to a human but not so much to the selective search algorithm. Keywords—Object recognition, selective search, segmentation, exhaustive search, hierarchical image structure.) <|cite_end|> to locate RoIs in the input images and uses a DCN-based regionwise classifier to classify the RoIs independently. SPP-Net <|cite_start|> (Reference: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224x224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. 
The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102x faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.) <|cite_end|> and Fast-RCNN <|cite_start|> (Reference: Fast R-CNN: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.) <|cite_end|> improve R-CNN by extracting the RoIs from the feature maps. Faster-RCNN <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> can be trained end to end thanks to the introduction of the RPN (region proposal network). RPN can generate RoIs by regressing the anchor boxes. Since then, anchor boxes have been widely used in object detection. Mask-RCNN <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation.
Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|> adds a mask prediction branch to Faster-RCNN, which can detect objects and predict their masks at the same time. R-FCN <|cite_start|> (Reference: R-FCN: Object Detection via Region-based Fully Convolutional Networks: We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast/Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6% mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https://github.com/daijifeng001/r-fcn) <|cite_end|> replaces the fully connected layers with position-sensitive score maps to better detect objects. Cascade R-CNN <|cite_start|> (Reference: Cascade R-CNN: Delving into High Quality Object Detection: In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector.
The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code will be made available at https://github.com/zhaoweicai/cascade-rcnn.) <|cite_end|> addresses the problem of overfitting at training and quality mismatch at inference by training a sequence of detectors with increasing IoU thresholds. Keypoint-based object detection approaches <|cite_start|> (Reference: DeNet: Scalable Real-time Object Detection with Directed Sparse Sampling: We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet) <|cite_end|> <|cite_start|> (Reference: Grid R-CNN: This paper proposes a novel object detection framework named Grid R-CNN, which adopts a grid guided localization mechanism for accurate object detection. Different from the traditional regression based methods, the Grid R-CNN captures the spatial information explicitly and enjoys the position sensitive property of fully convolutional architecture. Instead of using only two independent points, we design a multi-point supervision formulation to encode more clues in order to reduce the impact of inaccurate prediction of specific points. To take the full advantage of the correlation of points in a grid, we propose a two-stage information fusion strategy to fuse feature maps of neighbor grid points. The grid guided localization approach is easy to be extended to different state-of-the-art detection frameworks. Grid R-CNN leads to high quality object localization, and experiments demonstrate that it achieves a 4.1% AP gain at IoU=0.8 and a 10.0% AP gain at IoU=0.9 on COCO benchmark compared to Faster R-CNN with Res50 backbone and FPN architecture.) <|cite_end|> have been proposed to avoid the drawbacks of anchor boxes and bounding-box regression.
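To make the anchor-box mechanism above concrete, the sketch below decodes the $(d_x, d_y, d_w, d_h)$ offsets regressed by an RPN-style head back into box coordinates. This is an illustrative rendition of the standard R-CNN box parameterization; the function and variable names are ours rather than taken from any cited implementation.

\begin{verbatim}
import numpy as np

def decode_anchor_deltas(anchors, deltas):
    # anchors: (N, 4) reference boxes as (x1, y1, x2, y2);
    # deltas: (N, 4) predicted offsets (dx, dy, dw, dh).
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    dx, dy, dw, dh = deltas.T
    pred_ctr_x = ctr_x + dx * widths   # shift center by a fraction of anchor size
    pred_ctr_y = ctr_y + dy * heights
    pred_w = widths * np.exp(dw)       # scale width/height in log space
    pred_h = heights * np.exp(dh)

    return np.stack([pred_ctr_x - 0.5 * pred_w, pred_ctr_y - 0.5 * pred_h,
                     pred_ctr_x + 0.5 * pred_w, pred_ctr_y + 0.5 * pred_h],
                    axis=1)
\end{verbatim}

Regressing offsets relative to a fixed set of reference boxes, rather than predicting absolute coordinates, is precisely the design that the keypoint-based detectors seek to avoid.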
Other notable works target specific problems in object detection: \eg, <|cite_start|> (Reference: CoupleNet: Coupling Global Structure with Local Parts for Object Detection: The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available.) <|cite_end|> <|cite_start|> (Reference: ME R-CNN: Multi-Expert R-CNN for Object Detection: We introduce Multi-Expert Region-based Convolutional Neural Network (ME R-CNN) which is equipped with multiple experts (ME) where each expert is learned to process a certain type of regions of interest (RoIs). This architecture better captures the appearance variations of the RoIs caused by different shapes, poses, and viewing angles. In order to direct each RoI to the appropriate expert, we devise a novel "learnable" network, which we call, expert assignment network (EAN). EAN automatically learns the optimal RoI-expert relationship even without any supervision of expert assignment. As the major components of ME R-CNN, ME and EAN, are mutually affecting each other while tied to a shared network, neither an alternating nor a naive end-to-end optimization is likely to fail. To address this problem, we introduce a practical training strategy which is tailored to optimize ME, EAN, and the shared network in an end-to-end fashion. We show that both of the architectures provide considerable performance increase over the baselines on PASCAL VOC 07, 12, and MS COCO datasets.) <|cite_end|> focus on architecture design, <|cite_start|> (Reference: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks: It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important.
ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9% to 76.4% mAP. On the new and more challenging MS COCO dataset, we improve state-of-the-art from 19.7% to 33.1% mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.) <|cite_end|> <|cite_start|> (Reference: Object Detection Via a Multi-Region and Semantic Segmentation-Aware CNN Model: We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9% correspondingly, surpassing any other published work by a significant margin.) <|cite_end|> <|cite_start|> (Reference: Contextual Priming and Feedback for Faster R-CNN: ) <|cite_end|> <|cite_start|> (Reference: Gated Bi-directional CNN for Object Detection: ) <|cite_end|> focus on contextual relationships, <|cite_start|> (Reference: Scale-Aware Trident Networks for Object Detection: Scale variation is one of the key challenges in object detection. In this work, we first present a controlled experiment to investigate the effect of receptive fields for scale variation in object detection. Based on the findings from the exploration experiments, we propose a novel Trident Network (TridentNet) aiming to generate scale-specific feature maps with a uniform representational power. We construct a parallel multi-branch architecture in which each branch shares the same transformation parameters but with different receptive fields. Then, we adopt a scale-aware training scheme to specialize each branch by sampling object instances of proper scales for training. As a bonus, a fast approximation version of TridentNet could achieve significant improvements without any additional parameters and computational cost compared with the vanilla detector. On the COCO dataset, our TridentNet with ResNet-101 backbone achieves state-of-the-art single-model results of 48.4 mAP. Codes are available at https://git.io/fj5vR.) <|cite_end|> <|cite_start|> (Reference: A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection: A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs.
State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.) <|cite_end|> focus on multi-scale feature unification. \begin{figure*}[!tb] \centering \includegraphics[width=0.98\textwidth]{Network_Structure.pdf} \vspace{-2ex} \caption{Architecture of CenterNet. A convolutional backbone network applies cascade corner pooling and center pooling to output two corner heatmaps and a center keypoint heatmap, respectively. Similar to CornerNet, a pair of detected corners with similar embeddings is used to detect a potential bounding box. Then the detected center keypoints are used to determine the final bounding boxes.} \label{structure} \end{figure*} \vspace{1ex}\noindent \textbf{One-stage approaches}~remove the RoI extraction process and directly classify and regress the candidate anchor boxes. YOLO <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|> uses fewer anchor boxes (dividing the input image into an $\mathrm{S}\times\mathrm{S}$ grid) to perform regression and classification. YOLOv2 <|cite_start|> (Reference: YOLO9000: Better, Faster, Stronger: We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP.
But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.) <|cite_end|> improves the performance by using more anchor boxes and a new bounding box regression method. SSD <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: ) <|cite_end|> places anchor boxes densely over an input image and uses features from different convolutional layers to regress and classify the anchor boxes. DSSD <|cite_start|> (Reference: {DSSD: Deconvolutional single shot detector: The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with $513 \times 513$ input achieves 81.5% mAP on VOC2007 test, 80.0% mAP on VOC2012 test, and 33.2% mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.) <|cite_end|> introduces a deconvolution module into SSD to combine low-level and high-level features. R-SSD <|cite_start|> (Reference: Enhancement of SSD by concatenating feature maps for object detection: We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300 x 300 achieved 78.5% mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512 x 512 sized input achieved 80.8% mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.) <|cite_end|> instead uses pooling and deconvolution operations in different feature layers to achieve a similar fusion of low-level and high-level features.
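To illustrate the dense anchor placement used by SSD-style detectors, the sketch below tiles anchors of several scales and aspect ratios over a square feature map; the function name and example parameters are hypothetical, chosen only to show how coarser feature maps naturally host larger anchors.

\begin{verbatim}
import itertools
import numpy as np

def tile_anchors(fmap_size, scales, aspect_ratios):
    # One anchor per (scale, aspect ratio) pair at every cell of an
    # fmap_size x fmap_size grid; boxes are (cx, cy, w, h) in [0, 1].
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
        for s, ar in itertools.product(scales, aspect_ratios):
            boxes.append([cx, cy, s * np.sqrt(ar), s / np.sqrt(ar)])
    return np.array(boxes)

# A coarse map with large anchors and a fine map with small ones,
# mirroring how SSD assigns different scales to different layers.
coarse = tile_anchors(8, scales=[0.4], aspect_ratios=[1.0, 2.0, 0.5])
fine = tile_anchors(32, scales=[0.1], aspect_ratios=[1.0, 2.0, 0.5])
\end{verbatim}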
RON <|cite_start|> (Reference: RON: Reverse Connection with Objectness Prior Networks for Object Detection: We present RON, an efficient and effective framework for generic object detection. Our motivation is to smartly associate the best of the region-based (e.g., Faster R-CNN) and region-free (e.g., SSD) methodologies. Under fully convolutional architecture, RON mainly focuses on two fundamental problems: (a) multi-scale object localization and (b) negative sample mining. To address (a), we design the reverse connection, which enables the network to detect objects on multi-levels of CNNs. To deal with (b), we propose the objectness prior to significantly reduce the searching space of objects. We optimize the reverse connection, objectness prior and object detector jointly by a multi-task loss function, thus RON can directly predict final detection results from all locations of various feature maps. Extensive experiments on the challenging PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO benchmarks demonstrate the competitive performance of RON. Specifically, with VGG-16 and low resolution 384X384 input size, the network gets 81.3% mAP on PASCAL VOC 2007, 80.7% mAP on PASCAL VOC 2012 datasets. Its superiority increases when datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. With 1.5G GPU memory at test phase, the speed of the network is 15 FPS, 3X faster than the Faster R-CNN counterpart.) <|cite_end|> proposes a reverse connection and an objectness prior, which together extract multi-scale features effectively. RefineDet <|cite_start|> (Reference: Single-Shot Refinement Neural Network for Object Detection: For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multi-task loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet) <|cite_end|> refines the locations and sizes of the anchor boxes twice, inheriting the merits of both one-stage and two-stage approaches.
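The corner pooling operation referenced in the architecture figure above can also be sketched compactly; the single-channel version below handles the top-left case and is an illustrative reading of the operation rather than an official implementation.

\begin{verbatim}
import numpy as np

def top_left_corner_pool(fmap):
    # For each location, add the max over everything to its right to
    # the max over everything below it, so a candidate top-left corner
    # can respond to the object's topmost and leftmost boundaries.
    right_max = np.maximum.accumulate(fmap[:, ::-1], axis=1)[:, ::-1]
    down_max = np.maximum.accumulate(fmap[::-1, :], axis=0)[::-1, :]
    return right_max + down_max
\end{verbatim}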
CornerNet <|cite_start|> (Reference: CornerNet: Detecting Objects as Paired Keypoints: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors.) <|cite_end|> is another keypoint-based approach, which directly detects an object using a pair of corners. Although CornerNet achieves high performance, it still leaves considerable room for improvement. <|paper_end|>
[ "<|reference_start|> Fast R-CNN: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. <|reference_end|>", "<|reference_start|> Contextual Priming and Feedback for Faster R-CNN: <|reference_end|>", "<|reference_start|> Gated Bi-directional CNN for Object Detection: <|reference_end|>", "<|reference_start|> You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset. <|reference_end|>" ]
[ 14, 25, 26, 29 ]
{"<|cite_1|>": "arxiv-52559", "<|multi_cite_2_1|>": "arxiv-76959", "<|multi_cite_2_2|>": "arxiv-119553", "<|multi_cite_2_3|>": "ss-697426", "<|multi_cite_2_4|>": "arxiv-79041", "<|multi_cite_2_5|>": "arxiv-78819", "<|cite_3|>": "arxiv-168217", "<|cite_4|>": "arxiv-168217", "<|cite_5|>": "arxiv-60292", "<|cite_6|>": "arxiv-94431", "<|cite_7|>": "arxiv-94431", "<|cite_8|>": "arxiv-52559", "<|cite_9|>": "ss-1102672", "<|cite_10|>": "arxiv-62406", "<|cite_11|>": "arxiv-76959", "<|cite_12|>": "arxiv-78819", "<|cite_13|>": "arxiv-119553", "<|cite_14|>": "arxiv-98334", "<|cite_15|>": "arxiv-142057", "<|multi_cite_16_1|>": "arxiv-120384", "<|multi_cite_16_2|>": "arxiv-182380", "<|multi_cite_17_1|>": "arxiv-131565", "<|multi_cite_17_2|>": "arxiv-120815", "<|multi_cite_18_1|>": "arxiv-89002", "<|multi_cite_18_2|>": "ss-691140", "<|multi_cite_18_3|>": "ss-1051574", "<|multi_cite_18_4|>": "ss-1313731", "<|multi_cite_19_1|>": "arxiv-186773", "<|multi_cite_19_2|>": "arxiv-102713", "<|cite_20|>": "arxiv-79041", "<|cite_21|>": "arxiv-113287", "<|cite_22|>": "ss-697426", "<|cite_23|>": "ss-1264357", "<|cite_24|>": "arxiv-125236", "<|cite_25|>": "arxiv-128592", "<|cite_26|>": "arxiv-140547", "<|cite_27|>": "arxiv-168217"}
2110.03070
<|paper_start|> Title: Robust Generalized Method of Moments: A Finite Sample Viewpoint Abstract: Robust Generalized Method of Moments: A Finite Sample Viewpoint: For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant $\epsilon$ fraction of adversarially corrupted samples, and that has an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$. To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As two examples of the generality of our algorithm, we show how our estimation algorithm and assumptions apply to instrumental variables linear and logistic regression. Moreover, we experimentally validate that our estimator outperforms classical IV regression and two-stage Huber regression on synthetic and semi-synthetic datasets with corruption. Introduction Econometric and causal inference methodologies are increasingly being incorporated into automated large-scale decision systems. Inevitably, these systems need to deal with the plethora of practical issues that arise from automation. One important aspect is being able to deal with corrupted or irregular data, either due to poor data collection, the presence of outliers, or adversarial attacks by malicious agents. Even more classical applications of econometric methods in social science studies can greatly benefit from robust inference so as not to draw conclusions solely driven by a handful of samples, as was recently highlighted in <|cite_start|> (Reference: An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions?: We propose a method to assess the sensitivity of econometric analyses to the removal of a small fraction of the sample. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide a finite-sample metric to approximately compute the number (or fraction) of observations that has the greatest influence on a given result when dropped. We call our resulting metric the Approximate Maximum Influence Perturbation. Our approximation is automatically computable and works for common estimators (including OLS, IV, GMM, MLE, and variational Bayes). We provide explicit finite-sample error bounds on our approximation for linear and instrumental variables regressions. At minimal computational cost, our metric provides an exact finite-sample lower bound on sensitivity for any estimator, so any non-robustness our metric finds is conclusive. We demonstrate that the Approximate Maximum Influence Perturbation is driven by a low signal-to-noise ratio in the inference problem, is not reflected in standard errors, does not disappear asymptotically, and is not a product of misspecification. Several empirical applications show that even 2-parameter linear regression analyses of randomized trials can be highly sensitive.
While we find some applications are robust, in others the sign of a treatment effect can be changed by dropping less than 1% of the sample even when standard errors are small.) <|cite_end|>. Recent work in statistical machine learning has enabled robust estimation for regression problems and, more generally, estimation problems that reduce to the minimization of a stochastic loss. However, many estimation methods in causal inference and econometrics do not fall under this umbrella. A more general statistical framework that encompasses the most widely used estimation techniques in econometrics and causal inference is the framework of estimating models defined via \emph{moment conditions}. In this paper we offer a robust estimation algorithm that extends a recent line of work in robust statistics to this more general estimation setting. For a family of distributions $\{\mathcal{D}_\theta: \theta \in \Theta\}$, identifying the parameter $\theta$ is often equivalent to solving \begin{equation} \EE_{X \sim \mathcal{D}_\theta}[g(X, \theta)] = 0, \label{eq:moment-conditions} \end{equation} for an appropriate problem-specific vector-valued function $g$. This formalism encompasses such problems as linear regression (with covariates $X$, response $Y$, and moment $g((X, Y), \theta) = X(Y - X^T\theta)$) and instrumental variables linear regression (with covariates $X$, response $Y$, instruments $Z$, and moment $g((X,Y,Z), \theta) = Z(Y - X^T\theta)$). Under simple identifiability assumptions, moment conditions are statistically tractable, and can be solved by the \emph{Generalized Method of Moments} (GMM) <|cite_start|> (Reference: Large Sample Properties of Generalized Method of Moments Estimators: ) <|cite_end|>. Given independent observations $X_1,\dots,X_n \sim \mathcal{D}_\theta$, the GMM estimator is \[\hat{\theta} = \argmin_{\theta \in \Theta} \left(\frac{1}{n} \sum_{i=1}^n g(X_i, \theta)\right)^T W \left(\frac{1}{n} \sum_{i=1}^n g(X_i, \theta)\right)\] for a positive-definite weight matrix $W$. Of course, for general functions $g$, finding $\hat{\theta}$ (the global minimizer of a potentially non-convex function) may be computationally intractable. Under stronger assumptions, all approximate \emph{local} minima of the above function are near the true parameter, in which case the GMM estimator is efficiently approximable. For instrumental variables (IV) linear regression, these conditions follow from standard non-degeneracy assumptions. Due to its flexibility, the GMM estimator is widely used in practice (or heuristic variants, in models where it is computationally intractable). Unfortunately, like most other classical estimators in statistics, the GMM estimator suffers from a lack of robustness: a single outlier in the observations can arbitrarily corrupt the estimate. \paragraph{Robust statistics} Initiated by Tukey and Huber in the 1960s, robust statistics is a broad field studying estimators which have provable guarantees even in the presence of outliers <|cite_start|> (Reference: {Robust statistics: The classical books on this subject are Hampel et al. (1986); Huber (1981), with somewhat simpler (but partial) introductions by Rousseeuw & Leroy (1987); Staudte & Sheather (1990). The dates reflect the development of the subject: it had tremendous growth for about two decades from 1964, but failed to win over the mainstream. I think it is an important area that is used a lot less than it ought to be.) <|cite_end|>.
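To fix ideas before turning to robust alternatives, the sketch below evaluates the classical GMM objective defined above for the IV linear regression moment $g((X,Y,Z), \theta) = Z(Y - X^T\theta)$; the code is illustrative, with names of our own choosing, and defaults to the identity weight matrix.

\begin{verbatim}
import numpy as np

def gmm_objective(theta, Z, X, Y, W=None):
    # Z: (n, k) instruments, X: (n, d) covariates, Y: (n,) responses.
    residuals = Y - X @ theta            # (n,)
    g_bar = Z.T @ residuals / len(Y)     # empirical moment, shape (k,)
    if W is None:
        W = np.eye(len(g_bar))           # positive-definite weight matrix
    return g_bar @ W @ g_bar             # quadratic form g_bar^T W g_bar
\end{verbatim}

A single corrupted row of $(X, Y, Z)$ can move the empirical moment, and hence the minimizer, arbitrarily far, which is exactly the fragility that robust estimators aim to address.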
Outliers can be modelled as samples from a heavy-tailed distribution, or even as adversarially and arbitrarily corrupted data. Classically, robustness of an estimator against arbitrary outliers is measured by the breakdown point (the fraction of outliers which can be tolerated without causing the estimator to become unbounded <|cite_start|> (Reference: A General Qualitative Definition of Robustness: ) <|cite_end|>) and the influence (the maximum change in the estimator under an infinitesimal fraction of outliers <|cite_start|> (Reference: The influence curve and its role in robust estimation: Abstract This paper treats essentially the first derivative of an estimator viewed as functional and the ways in which it can be used to study local robustness properties. A theory of robust estimation “near” strict parametric models is briefly sketched and applied to some classical situations. Relations between von Mises functionals, the jackknife and U-statistics are indicated. A number of classical and new estimators are discussed, including trimmed and Winsorized means, Huber-estimators, and more generally maximum likelihood and M-estimators. Finally, a table with some numerical robustness properties is given.) <|cite_end|>). These metrics have spurred the development and study of numerous statistical estimators which are often used in practice to mitigate the effect of outliers (e.g. Huber loss for mean estimation, linear regression, and other problems <|cite_start|> (Reference: Robust Estimation of a Location Parameter: ) <|cite_end|>). Unfortunately, classical robust statistics suffers from a number of limitations due to its emphasis on statistical efficiency and low-dimensional statistical problems. In particular, until the last few years, most high-dimensional statistical problems lacked robust estimators satisfying the following basic properties (see e.g. <|cite_start|> (Reference: {Robust Estimators in High-Dimensions Without the Computational Intractability: We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich hist...) <|cite_end|> for discussion in the setting of learning Gaussians and mixtures of Gaussians): \begin{enumerate} \item Computational tractability (i.e. evading the curse of dimensionality) \item Robustness to a constant fraction of arbitrary outliers \item Quantitative error guarantees without dimension dependence. \end{enumerate} In a revival of robust statistics within the field of theoretical computer science, estimators with the above properties have been developed for various fundamental problems in high-dimensional statistics, including mean and covariance estimation <|cite_start|> (Reference: {Robust Estimators in High-Dimensions Without the Computational Intractability: We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich hist...) <|cite_end|> <|cite_start|> (Reference: Being Robust (in High Dimensions) Can Be Practical: Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors.
Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.) <|cite_end|>, linear regression <|cite_start|> (Reference: Efficient Algorithms and Lower Bounds for Robust Linear Regression: We study the problem of high-dimensional linear regression in a robust model where an $\epsilon$-fraction of the samples can be adversarially corrupted. We focus on the fundamental setting where the covariates of the uncorrupted samples are drawn from a Gaussian distribution $\mathcal{N}(0, \Sigma)$ on $\mathbb{R}^d$. We give nearly tight upper bounds and computational lower bounds for this problem. Specifically, our main contributions are as follows: For the case that the covariance matrix is known to be the identity, we give a sample near-optimal and computationally efficient algorithm that outputs a candidate hypothesis vector $\widehat{\beta}$ which approximates the unknown regression vector $\beta$ within $\ell_2$-norm $O(\epsilon \log(1/\epsilon) \sigma)$, where $\sigma$ is the standard deviation of the random observation noise. An error of $\Omega (\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size. Prior work gave an algorithm for this problem with sample complexity $\tilde{\Omega}(d^2/\epsilon^2)$ whose error guarantee scales with the $\ell_2$-norm of $\beta$. For the case of unknown covariance, we show that we can efficiently achieve the same error guarantee as in the known covariance case using an additional $\tilde{O}(d^2/\epsilon^2)$ unlabeled examples. On the other hand, an error of $O(\epsilon \sigma)$ can be information-theoretically attained with $O(d/\epsilon^2)$ samples. We prove a Statistical Query (SQ) lower bound providing evidence that this quadratic tradeoff in the sample size is inherent. More specifically, we show that any polynomial time SQ learning algorithm for robust linear regression (in Huber's contamination model) with estimation complexity $O(d^{2-c})$, where $c>0$ is an arbitrarily small constant, must incur an error of $\Omega(\sqrt{\epsilon} \sigma)$.) <|cite_end|> <|cite_start|> (Reference: Robust Linear Regression: Optimal Rates in Polynomial Time: We obtain robust and computationally efficient estimators for learning several linear models that achieve statistically optimal convergence rate under minimal distributional assumptions. Concretely, we assume our data is drawn from a $k$-hypercontractive distribution and an $\epsilon$-fraction is adversarially corrupted. We then describe an estimator that converges to the optimal least-squares minimizer for the true distribution at a rate proportional to $\epsilon^{2-2/k}$, when the noise is independent of the covariates. We note that no such estimator was known prior to our work, even with access to unbounded computation. 
The rate we achieve is information-theoretically optimal and thus we resolve the main open question in Klivans, Kothari and Meka [COLT'18]. Our key insight is to identify an analytic condition that serves as a polynomial relaxation of independence of random variables. In particular, we show that when the moments of the noise and covariates are negatively-correlated, we obtain the same rate as independent noise. Further, when the condition is not satisfied, we obtain a rate proportional to $\epsilon^{2-4/k}$, and again match the information-theoretic lower bound. Our central technical contribution is to algorithmically exploit independence of random variables in the "sum-of-squares" framework by formulating it as the aforementioned polynomial inequality.) <|cite_end|>, and stochastic optimization <|cite_start|> (Reference: Sever: A Robust Meta-Algorithm for Stochastic Optimization: In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable -- beyond running the base learner itself, it only requires computing the top singular vector of a certain $n \times d$ matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with $1\%$ corruptions, we achieved $7.4\%$ test error, compared to $13.4\%-20.5\%$ for the baselines, and $3\%$ error on the uncorrupted dataset. Similarly, on the drug design dataset, with $10\%$ corruptions, we achieved $1.42$ mean-squared error test error, compared to $1.51$-$2.33$ for the baselines, and $1.23$ error on the uncorrupted dataset.) <|cite_end|>. However, practitioners in econometrics and applied statistics often employ more sophisticated inference methods such as GMM and IV regression, for which computationally and statistically efficient robust estimators are still lacking. \paragraph{Our contribution} In this work, we address the aforementioned lack. Extending the \textsc{Sever} algorithm for robust stochastic optimization <|cite_start|> (Reference: Sever: A Robust Meta-Algorithm for Stochastic Optimization: In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable -- beyond running the base learner itself, it only requires computing the top singular vector of a certain $n \times d$ matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with $1\%$ corruptions, we achieved $7.4\%$ test error, compared to $13.4\%-20.5\%$ for the baselines, and $3\%$ error on the uncorrupted dataset. Similarly, on the drug design dataset, with $10\%$ corruptions, we achieved $1.42$ mean-squared error test error, compared to $1.51$-$2.33$ for the baselines, and $1.23$ error on the uncorrupted dataset.) 
<|cite_end|>, we develop a computationally efficient and provably robust GMM estimator under intuitive deterministic assumptions about the uncorrupted data. We instantiate this estimator for two special cases of GMM---instrumental variables linear regression and instrumental variables logistic regression---under distributional assumptions about the covariates, instruments, and responses (and in fact our algorithm also applies to the IV generalized linear model under certain conditions on the link function). We corroborate the theory with experiments solving IV linear regression on corrupted synthetic and semi-synthetic data, which demonstrate that our algorithm outperforms non-robust IV as well as Huberized IV. \paragraph{Techniques and Relation to [DKKLSS19]} Our robust GMM algorithm builds upon the \textsc{Sever} algorithm and framework introduced in <|cite_start|> (Reference: Sever: A Robust Meta-Algorithm for Stochastic Optimization: In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable -- beyond running the base learner itself, it only requires computing the top singular vector of a certain $n \times d$ matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with $1\%$ corruptions, we achieved $7.4\%$ test error, compared to $13.4\%-20.5\%$ for the baselines, and $3\%$ error on the uncorrupted dataset. Similarly, on the drug design dataset, with $10\%$ corruptions, we achieved $1.42$ mean-squared error test error, compared to $1.51$-$2.33$ for the baselines, and $1.23$ error on the uncorrupted dataset.) <|cite_end|> for stochastic optimization. In this section, we briefly outline the relation. The \textsc{Sever} algorithm robustly finds an approximate critical point for the empirical mean of input functions $f_1,\dots,f_n: \RR^d \to \RR$, i.e. for convex functions, approximately and robustly solves \[\frac{1}{n} \sum_{i=1}^n \Gradient f_i(w^*) = 0.\] The approach is to alternate between (a) finding an approximate critical point $\hat{w}$ of the current sample set, and (b) filtering the sample set by $\Gradient f_i(\hat{w})$, until convergence (i.e. when no samples are filtered out). Filtering ensures that at convergence, the mean of $\Gradient f_i(\hat{w})$ over the current sample set (which is small by criticality) is near the mean over the uncorrupted samples, so $\hat{w}$ is an approximate critical point for the uncorrupted samples, as desired. Any moment condition which is the gradient of some function can be interpreted as a critical-point finding problem, and solved in the above way. An example is linear regression, where the moment $g(w) = X(Y - X^T w)$ is proportional to the gradient of the squared-loss $f(w) = \norm{Y - X^T w}_2^2$. However, $g(w) = Z(Y - X^T w)$ is not a gradient, so IV linear regression cannot directly be solved by \textsc{Sever}. In general, we need a way to robustly find an approximate solution to \[\frac{1}{n}\sum_{i=1}^n g_i(w^*) = 0,\] where $g_i(w) := g(X_i, w)$. Our approach is to alternate approximately minimizing \[\norm{\frac{1}{|S|} \sum_{i \in S} g_i(w)}_2^2,\] where $S$ is the current sample set, with a filtering step.
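The alternation just described can be caricatured in a few lines. The sketch below is a simplified rendition under our own naming, using a basic top-singular-direction filtering rule; it is not the paper's full algorithm, which (as explained next) requires a second filter on the gradients of the moments.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def robust_gmm_sketch(moments, theta0, n, eps, iters=20):
    # moments(theta) returns the (n, k) matrix whose i-th row is
    # g_i(theta); eps is the assumed fraction of corrupted samples.
    S = np.arange(n)                       # indices currently trusted
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        # (a) approximately minimize || mean of g_i over S ||^2
        obj = lambda t: np.linalg.norm(moments(t)[S].mean(axis=0)) ** 2
        theta = minimize(obj, theta).x
        # (b) filter: score samples along the top singular direction
        # of the centered moments and drop the most extreme ones
        G = moments(theta)[S]
        Gc = G - G.mean(axis=0)
        top_dir = np.linalg.svd(Gc, full_matrices=False)[2][0]
        scores = (Gc @ top_dir) ** 2
        keep = scores <= np.quantile(scores, 1 - eps / 2)
        if keep.all():
            break                          # converged: nothing filtered
        S = S[keep]
    return theta
\end{verbatim}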
However, it is not sufficient to filter by $g_i(\hat{w})$, because the minimization step does not necessarily output $\hat{w}$ for which $\frac{1}{|S|} \sum_{i \in S} g_i(\hat{w})$ is small (unlike for \textsc{Sever}, where $g_i = \Gradient f_i$, and so an approximate zero of $\frac{1}{|S|} \sum_{i \in S} \Gradient f_i(w)$ can always be found, for an arbitrary set of functions $\{f_i\}_{i \in S}$). To fix this, we introduce a second filtering step based on $\Gradient g_i$. Under an identifiability condition for the uncorrupted samples (which is needed even in the absence of corruption), we show that the above situation, where $\frac{1}{|S|} \sum_{i \in S} g_i(w)$ is large, can be detected by the gradient filtering step, so that at convergence the empirical moment is in fact small. \paragraph{Further related work} The generalized method of moments and instrumental variables regression have indeed been studied in the context of robust statistics <|cite_start|> (Reference: Two stage least absolute deviations estimators: In this paper the method of least absolute deviations is applied to the estimation of the parameters of a structural equation in the simultaneous equations model. A class of estimators called two stage least absolute deviations estimators is defined, their asymptotic properties are derived, and the problem of finding the optimal member of the class is considered. IN THIS PAPER WE APPLY the method of least absolute deviations to the estimation of the parameters of a structural equation in the simultaneous equations model. We define a class of estimators called two stage least absolute deviations estimators (2SLAD) and derive their asymptotic properties. They are so named as their relationship to the two stage least squares estimator (2SLS) is analogous to the relationship of the least absolute deviations estimator (LAD) to the least squares estimator (LS) in the standard regression model. The LAD estimation has been extensively studied in the context of the standard regression model and its usefulness is universally recognized. In this paper we show that the advantage of 2SLAD over 2SLS in the simultaneous equations model can be as great as that of LAD over LS in the standard regression model, if 2SLAD is properly defined. This last clause is very important, since the results of this paper indicate that the LAD analogue of 2SLS that has been considered before in the literature is not an appropriate method.) <|cite_end|> <|cite_start|> (Reference: A natural robustification of the ordinary instrumental variables estimator: Instrumental variables estimators are designed to provide consistent parameter estimates for linear regression models when some covariates are correlated with the error term. We propose a new robust instrumental variables estimator (RIV) which is a natural robustification of the ordinary instrumental variables estimator (OIV). Specifically, we construct RIV using a robust multivariate location and scatter S‐estimator to robustify the solution of the estimating equations that define OIV. RIV is computationally inexpensive and readily available for applications through the R‐library riv. It has attractive robustness and asymptotic properties, including high resilience to outliers, bounded influence function, consistency under weak distributional assumptions, asymptotic normality under mild regularity conditions, and equivariance.
We further endow RIV with an iterative algorithm which allows for the estimation of models with endogenous continuous covariates and exogenous dummy covariates. We study the performance of RIV when the data contains outliers using an extensive Monte Carlo simulation study and by applying it to a limited‐access dataset from the Framingham Heart Study‐Cohort to estimate the effect of long‐term systolic blood pressure on left atrial size.) <|cite_end|> <|cite_start|> (Reference: Two-Stage Bounded-Influence Estimators for Simultaneous-Equations Models: This article presents a class of estimators for linear structural models that are robust to heavytailed disturbance distributions, gross errors in either the endogenous or exogenous variables, and certain other model failures. The class of estimators modifies ordinary two-stage least squares by replacing each least squares regression by a bounded-influence regression. Conditions under which the estimators are qualitatively robust, consistent, and asymptotically normal are established, and an empirical example is presented.) <|cite_end|> <|cite_start|> (Reference: Robust inference with GMM estimators: ) <|cite_end|>. However, the resulting estimators face the same nearly ubiquitous issues described above. For instance, <|cite_start|> (Reference: Two stage least absolute deviations estimators: In this paper the method of least absolute deviations is applied to the estimation of the parameters of a structural equation in the simultaneous equations model. A class of estimators called two stage least absolute deviations estimators is defined, their asymptotic properties are derived, and the problem of finding the optimal member of the class is considered. IN THIS PAPER WE APPLY the method of least absolute deviations to the estimation of the parameters of a structural equation in the simultaneous equations model. We define a class of estimators called two stage least absolute deviations estimators (2SLAD) and derive their asymptotic properties. They are so named as their relationship to the two stage least squares estimator (2SLS) is analogous to the relationship of the least absolute deviations estimator (LAD) to the least squares estimator (LS) in the standard regression model. The LAD estimation has been extensively studied in the context of the standard regression model and its usefulness is universally recognized. In this paper we show that the advantage of 2SLAD over 2SLS in the simultaneous equations model can be as great as that of LAD over LS in the standard regression model, if 2SLAD is properly defined. This last clause is very important, since the results of this paper indicate that the LAD analogue of 2SLS that has been considered before in the literature is not an appropriate method.) <|cite_end|> presents a variant of two-stage least squares which uses least absolute deviations. The resulting estimator performs well under the metric of bounded influence, but an arbitrary outlier can still cause arbitrary changes in the estimator. The estimator proposed by <|cite_start|> (Reference: A natural robustification of the ordinary instrumental variables estimator: Instrumental variables estimators are designed to provide consistent parameter estimates for linear regression models when some covariates are correlated with the error term. We propose a new robust instrumental variables estimator (RIV) which is a natural robustification of the ordinary instrumental variables estimator (OIV). 
Specifically, we construct RIV using a robust multivariate location and scatter S‐estimator to robustify the solution of the estimating equations that define OIV. RIV is computationally inexpensive and readily available for applications through the R‐library riv. It has attractive robustness and asymptotic properties, including high resilience to outliers, bounded influence function, consistency under weak distributional assumptions, asymptotic normality under mild regularity conditions, and equivariance. We further endow RIV with an iterative algorithm which allows for the estimation of models with endogenous continuous covariates and exogenous dummy covariates. We study the performance of RIV when the data contains outliers using an extensive Monte Carlo simulation study and by applying it to a limited‐access dataset from the Framingham Heart Study‐Cohort to estimate the effect of long‐term systolic blood pressure on left atrial size.) <|cite_end|> modifies the closed-form solution to IV linear regression using robust mean and covariance estimators. These have attractive theoretical properties but are computationally intractable, and the heuristics by which they are implemented in practice have no associated theoretical guarantees. The robust GMM estimator presented in <|cite_start|> (Reference: Robust inference with GMM estimators: ) <|cite_end|> has bounded influence but is not robust to a constant fraction of outliers. <|paper_end|>
[ "<|reference_start|> Sever: A Robust Meta-Algorithm for Stochastic Optimization: In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable -- beyond running the base learner itself, it only requires computing the top singular vector of a certain $n \\times d$ matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with $1\\%$ corruptions, we achieved $7.4\\%$ test error, compared to $13.4\\%-20.5\\%$ for the baselines, and $3\\%$ error on the uncorrupted dataset. Similarly, on the drug design dataset, with $10\\%$ corruptions, we achieved $1.42$ mean-squared error test error, compared to $1.51$-$2.33$ for the baselines, and $1.23$ error on the uncorrupted dataset. <|reference_end|>", "<|reference_start|> A natural robustification of the ordinary instrumental variables estimator: Instrumental variables estimators are designed to provide consistent parameter estimates for linear regression models when some covariates are correlated with the error term. We propose a new robust instrumental variables estimator (RIV) which is a natural robustification of the ordinary instrumental variables estimator (OIV). Specifically, we construct RIV using a robust multivariate location and scatter S‐estimator to robustify the solution of the estimating equations that define OIV. RIV is computationally inexpensive and readily available for applications through the R‐library riv. It has attractive robustness and asymptotic properties, including high resilience to outliers, bounded influence function, consistency under weak distributional assumptions, asymptotic normality under mild regularity conditions, and equivariance. We further endow RIV with an iterative algorithm which allows for the estimation of models with endogenous continuous covariates and exogenous dummy covariates. We study the performance of RIV when the data contains outliers using an extensive Monte Carlo simulation study and by applying it to a limited‐access dataset from the Framingham Heart Study‐Cohort to estimate the effect of long‐term systolic blood pressure on left atrial size. <|reference_end|>", "<|reference_start|> Robust inference with GMM estimators: <|reference_end|>", "<|reference_start|> Robust inference with GMM estimators: <|reference_end|>" ]
[ 11, 15, 17, 20 ]
{"<|cite_1|>": "ss-1840967", "<|cite_2|>": "ss-2470978", "<|cite_3|>": "ss-1089863", "<|cite_4|>": "ss-1334998", "<|cite_5|>": "ss-1341779", "<|cite_6|>": "ss-683319", "<|cite_7|>": "ss-1556100", "<|multi_cite_8_1|>": "ss-1556100", "<|multi_cite_8_2|>": "arxiv-118051", "<|multi_cite_9_1|>": "arxiv-160818", "<|multi_cite_9_2|>": "arxiv-276059", "<|cite_10|>": "arxiv-150855", "<|cite_11|>": "arxiv-150855", "<|cite_12|>": "arxiv-150855", "<|multi_cite_13_1|>": "ss-2557724", "<|multi_cite_13_2|>": "ss-2557725", "<|multi_cite_13_3|>": "ss-2557726", "<|multi_cite_13_4|>": "ss-2557727", "<|cite_14|>": "ss-2557724", "<|cite_15|>": "ss-2557725", "<|cite_16|>": "ss-2557727"}
2106.02556
<|paper_start|> Title: Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning Abstract: Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning: The task of classifying emotions within a musical track has received widespread attention within the Music Information Retrieval (MIR) community. Music emotion recognition has traditionally relied on the use of acoustic features, verbal features, and metadata-based filtering. The role of musical prosody remains under-explored despite several studies demonstrating a strong connection between prosody and emotion. In this study, we restrict the input of traditional machine learning algorithms to the features of musical prosody. Furthermore, our proposed approach builds upon prior work by classifying emotions under an expanded emotional taxonomy, using the Geneva Wheel of Emotion. We utilize a methodology for individual data collection from vocalists, and personal ground truth labeling by the artists themselves. We found that traditional machine learning algorithms, when limited to the features of musical prosody, (1) achieve high accuracies for a single singer, (2) maintain high accuracy when the dataset is expanded to multiple singers, and (3) achieve high accuracies when trained on a reduced subset of the total features. Introduction \label{sec:introduction} The work presented in this paper is situated at the intersection of research on emotion for robotics <|cite_start|> (Reference: A Survey of Robotics and Emotion: Classifications and Models of Emotional Interaction: As emotion plays a growing role in robotic research it is crucial to develop methods to analyze and compare among the wide range of approaches. To this end we present a survey of 1427 IEEE and ACM publications that include robotics and emotion. This includes broad categorizations of trends in emotion input analysis, robot emotional expression, studies of emotional interaction and models for internal processing. We then focus on 232 papers that present internal processing of emotion, such as using a human's emotion for better interaction or turning environmental stimuli into an emotional drive for robotic path planning. We conducted constant comparison analysis of the 232 papers and arrived at three broad categorization metrics; emotional intelligence, emotional model and implementation, each including two or three subcategories. The subcategories address the algorithm used, emotional mapping, history, the emotional model, emotional categories, the role of emotion, the purpose of emotion and the platform. Our results show a diverse field of study, largely divided by the role of emotion in the system, either for improved interaction, or improved robotic performance. We also present multiple future opportunities for research and describe intrinsic challenges common in all publications.) <|cite_end|> and emotional classification research in Music Information Retrieval <|cite_start|> (Reference: Content-based music audio recommendation: We present the MusicSurfer, a metadata free system for the interaction with massive collections of music. MusicSurfer automatically extracts descriptions related to instrumentation, rhythm and harmony from music audio signals. Together with efficient similarity metrics, the descriptions allow navigation of multimillion track music collections in a flexible and efficient way without the need for metadata nor human ratings.)
<|cite_end|>. In particular, we focus on the under-explored domain of emotion-driven prosody for human-robot interaction <|cite_start|> (Reference: Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture: As human-robot collaboration opportunities continue to expand, trust becomes ever more important for full engagement and utilization of robots. Affective trust, built on emotional relationship and interpersonal bonds is particularly critical as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity, designed to avoid uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion from the musical phrase to low degree-of-freedom movements. Through a user study we showed that our system was able to accurately portray a range of emotions to the user. We also showed with a significant result that our non-linguistic audio generation achieved an 8% higher mean of average trust than using a state-of-the-art text-to-speech system.) <|cite_end|>. Verbal prosody is concerned with elements of speech that are not individual phonetic segments but rather pertain to linguistic functions such as intonation, tone, stress, and rhythm. Similarly, musical prosody is defined as the performer's manipulation of music for certain expressive and coordinating functions <|cite_start|> (Reference: What Is Musical Prosody: ) <|cite_end|>. It has been hypothesized that these expressive functions serve to communicate emotion <|cite_start|> (Reference: Music and Emotion: Theory and Research.: That music has an incredible power to move us emotionally is without question. Whether performing music, listening to music, or creating music, this bond with our emotions is always there. The natu ...) <|cite_end|>. In this paper, we explore the relationship between musical prosody and emotion through three research questions. First, are traditional machine learning algorithms able to accurately classify an individual's emotions when trained on only the features of musical prosody? Next, are these models able to generalize to a larger group of vocalists? Finally, which features of musical prosody contribute the most to the classification of emotion? The paper is structured as follows: in Section \ref{sec:background_and_motivation}, background and motivation are discussed. Section \ref{sec:metholdology} describes the dataset collection, training and testing, the taxonomies used in classification, the feature extraction methodology and analysis of their relevance to emotion, feature aggregation, feature selection, and model generalization. Section \ref{sec:results} presents the experiments: Experiment 1 asks how well traditional machine learning models can classify emotion when limited to inputs of musical prosody, Experiment 2 explores our approach's ability to generalize to a larger population of singers, and Experiment 3 explores the individual contribution to accuracy of each feature via training on reduced subsets of the input vector.
Section \ref{sec:discussion} provides discussion of these results, with particular attention paid to the relationships between emotions and potential future work. Finally, Section \ref{sec:conclusions} concludes the paper. A demo, in the form of a Python notebook with audio samples, is available online. \footnote{\url{https://github.com/brianmodel/EmotionClassification}} Related Work \label{sec:background_and_motivation} Emotion classification has been a major focus of research in recent years. Ekman created a discrete categorization that consists of fundamental basic emotions which form the root of more complex emotions <|cite_start|> (Reference: Basic emotions: determining not only that they pertain to emotion, but to which emotion . . . Appraisal is not always automatic. Sometimes the evaluation of what is happening is slow, deliberate and conscious. With such a more extended appraisal there may be some autonomic arousal, but perhaps not of a kind which is differentiated. The person could be said to be aroused or alerted, but no specific emotion is operative. Cognition plays the important role in determining what will transpire. During such extended appraisal the evaluation may match to the selective filters of the automatic appraiser . . . . It need not be, however; the experience may be diffuse rather than specific to one emotion” (pp. 58—59).) <|cite_end|>. Another classification model is the Circumplex model proposed by Posner et al., which plots emotions on a continuous, two-dimensional scale of valence and arousal <|cite_start|> (Reference: The circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology: The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion. We propose that basic emotion theories no longer explain adequately the vast number of empirical observations from studies in affective neuroscience, and we suggest that a conceptual shift is needed in the empirical approaches taken to the study of emotion and affective psychopathologies. The circumplex model of affect is more consistent with many recent findings from behavioral, cognitive neuroscience, neuroimaging, and developmental studies of affect. Moreover, the model offers new theoretical and empirical approaches to studying the development of affective disorders as well as the genetic and cognitive underpinnings of affective processing within the central nervous system.) <|cite_end|>. In this paper, we classify emotions using a model similar to the two-dimensional Circumplex model, which is further described in Section 3.1. There has also been much work done in the field of analyzing emotion from text for tasks such as sentiment analysis. Research on classification of emotion in audio has taken many different approaches.
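To illustrate the continuous-to-discrete step implied by the Circumplex view, the following minimal sketch snaps a (valence, arousal) point to its nearest discrete label. The prototype coordinates are illustrative assumptions for the sketch, not values taken from the cited works or from this paper's taxonomy.

\begin{verbatim}
import math

# Illustrative valence-arousal prototypes (assumed, not from the paper).
PROTOTYPES = {
    "joy":     ( 0.8,  0.6),
    "anger":   (-0.6,  0.8),
    "sadness": (-0.7, -0.5),
    "calm":    ( 0.6, -0.6),
}

def nearest_emotion(valence, arousal):
    """Map a point on the Circumplex plane to the closest discrete label."""
    return min(PROTOTYPES,
               key=lambda e: math.dist((valence, arousal), PROTOTYPES[e]))

print(nearest_emotion(0.5, 0.7))  # -> "joy"
\end{verbatim}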
Research into classifying emotions in knocking sounds has found that anger, happiness, and sadness could be easily classified from audio alone <|cite_start|> (Reference: :: The literary text has a remarkable capacity to continually invite new forms of reading that unlock its secrets. Visions, conceptions, and methods have therefore proliferated, in close and multifaceted engagement with a number of sciences, fields of knowledge, and theories. The semiotic field, in its major foundational conceptions as well as in subsequent contributions, has achieved a tangible leap in criticism, in the formulation of concepts, and in elucidating the meanings of literary texts. Semiotic study in Arab criticism has likewise produced contributions worth following, especially at the academic university level. In the case of Morocco, for example, the efforts of the semiotic researchers Mohamed Miftah, Said Bengrad, Abdellatif Mahfoud, and Abdelmajid Noussi have offered more than one entry point for presenting concepts and visions in this field, in addition to advanced analyses of poetry, the short story, the novel, and discourse in general. In semiotic study in Morocco, it is a matter of text, discourse, theoretical reference, methodological construction, and the means of their deployment in reading and interpretation, which has contributed to an expanded set of procedural tools, armed with sciences and forms of knowledge, for penetrating the ramifications of discourses.) <|cite_end|>. There have been multimodal approaches that use audio in combination with another feature, namely visual facial features <|cite_start|> (Reference: Audio-visual feature selection and reduction for emotion classification: Recognition of expressed emotion from speech and facial gestures was investigated in experiments on an audio-visual emotional database. A total of 106 audio and 240 visual features were extracted and then features were selected with Plus l-Take Away r algorithm based on Bhattacharyya distance criterion. In the second step, linear transformation methods, principal component analysis (PCA) and linear discriminant analysis (LDA), were applied to the selected features and Gaussian classifiers were used for classification of emotions. The performance was higher for LDA features compared to PCA features. The visual features performed better than audio features, for both PCA and LDA. Across a range of fusion schemes, the audio-visual feature results were close to that of visual features. A highest recognition rate of 53% was achieved with audio features, 98% with visual features, and 98% with audio-visual features selected by Bhattacharyya distance and transformed by LDA.) <|cite_end|> <|cite_start|> (Reference: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, March 28-30, 2000, Grenoble, France: Presents nine sessions containing a total of 88 papers from a conference organized to provide a primary forum for current work on machine perception of humans and human actions. Includes papers addressing face detection, face tracking using statistical methods, face tracking, face tracking using structural methods, face recognition, tracking people and recognizing activities, gesture recognition, face expression and gaze direction, structural models, and biological vision and 3D models. Invited talks address such topics as the use of computer graphics to study the recognition of facial attributes, problems in the description and interpretation of gesture in conversation, and other topics. Illustrated throughout in b&w. Lacks a subject index.) <|cite_end|> or text lyrics <|cite_start|> (Reference: Emotion Analysis of Songs Based on Lyrical and Audio Features: In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction.
ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.) <|cite_end|>. Furthermore, researchers have performed emotional classification from audio in the context of music by analyzing which musical features best convey emotions <|cite_start|> (Reference: Evaluation of Musical Features for Emotion Classification: Because music conveys and evokes feelings, a wealth of research has been performed on music emotion recognition. Previous research has shown that musical mood is linked to features based on rhythm, timbre, spectrum and lyrics. For example, sad music correlates with slow tempo, while happy music is generally faster. However, only limited success has been obtained in learning automatic classifiers of emotion in music. In this paper, we collect a ground truth data set of 2904 songs that have been tagged with one of the four words “happy”, “sad”, “angry” and “relaxed”, on the Last.FM web site. An excerpt of the audio is then retrieved from 7Digital.com, and various sets of audio features are extracted using standard algorithms. Two classifiers are trained using support vector machines with the polynomial and radial basis function kernels, and these are tested with 10-fold cross validation. Our results show that spectral features outperform those based on rhythm, dynamics, and, to a lesser extent, harmony. We also find that the polynomial kernel gives better results than the radial basis function, and that the fusion of different feature sets does not always lead to improved classification.) <|cite_end|>. Panda et al. have found a relationship between melodic and dynamic features and a number of specific emotions <|cite_start|> (Reference: Audio features for music emotion recognition: A survey: The design of meaningful audio features is a key need to advance the state-of-the-art in music emotion recognition (MER). This article presents a survey on the existing emotionally-relevant computational audio features, supported by the music psychology literature on the relations between eight musical dimensions (melody, harmony, rhythm, dynamics, tone color, expressivity, texture and form) and specific emotions. Based on this review, current gaps and needs are identified and strategies for future research on feature engineering for MER are proposed, namely ideas for computational audio features that capture elements of musical form, texture and expressivity that should be further researched. Previous MER surveys offered broad reviews, covering topics such as emotion paradigms, approaches for the collection of ground-truth data, types of MER problems and overviewing different MER systems. On the contrary, our approach is to offer a deep and specific review on one key MER problem: the design of emotionally-relevant audio features.) <|cite_end|>. However, such music-specific features cannot be easily generalized to other domains.
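As an illustration of the kinds of spectral, rhythm, and dynamics descriptors these studies compare, the sketch below computes a small track-level feature vector with librosa. The file name \texttt{song.wav} is a placeholder, and the particular summary statistics are one arbitrary choice among many rather than the feature set of any cited work.

\begin{verbatim}
import numpy as np
import librosa

# "song.wav" is a placeholder path; any mono audio file will do.
y, sr = librosa.load("song.wav", sr=22050, mono=True)

# Spectral descriptors (the family reported as most predictive above).
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)

# Rhythm: a global tempo estimate.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])  # scalar across librosa versions

# Dynamics: frame-wise RMS energy.
rms = librosa.feature.rms(y=y)

# One fixed-length vector per track, via simple statistics.
features = np.array([
    centroid.mean(), centroid.std(),
    rolloff.mean(), rolloff.std(),
    tempo,
    rms.mean(), rms.std(),
])
\end{verbatim}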
Prosody has been found by linguists to communicate emotion across various cultures, with patterns of pitch and loudness over time representing different emotions <|cite_start|> (Reference: Communicating emotion: The role of prosodic features.: ) <|cite_end|>, and has shown the potential to improve human-robot interaction <|cite_start|> (Reference: Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication: As robotic arms become prevalent in industry it is crucial to improve levels of trust from human collaborators. Low levels of trust in human-robot interaction can reduce overall performance and prevent full robot utilization. We investigated the potential benefits of using emotional musical prosody to allow the robot to respond emotionally to the user's actions. We tested participants' responses to interacting with a virtual robot arm that acted as a decision agent, helping participants select the next number in a sequence. We compared results from three versions of the application in a between-group experiment, where the robot had different emotional reactions to the user's input depending on whether the user agreed with the robot and whether the user's choice was correct. In all versions, the robot reacted with emotional gestures. One version used prosody-based emotional audio phrases selected from our dataset of singer improvisations, the second version used audio consisting of a single pitch randomly assigned to each emotion, and the final version used no audio, only gestures. Our results showed no significant difference for the percentage of times users from each group agreed with the robot, and no difference between user's agreement with the robot after it made a mistake. However, participants also took a trust survey following the interaction, and we found that the reported trust ratings of the musical prosody group were significantly higher than both the single-pitch and no audio groups.) <|cite_end|> <|cite_start|> (Reference: Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities: Research in creative robotics continues to expand across all creative domains, including art, music and language. Creative robots are primarily designed to be task specific, with limited research into the implications of their design outside their core task. In the case of a musical robot, this includes when a human sees and interacts with the robot before and after the performance, as well as in between pieces. These non-musical interaction tasks such as the presence of a robot during musical equipment set up, play a key role in the human perception of the robot however have received only limited attention. In this paper, we describe a new audio system using emotional musical prosody, designed to match the creative process of a musical robot for use before, between and after musical performances. Our generation system relies on the creation of a custom dataset for musical prosody. This system is designed foremost to operate in real time and allow rapid generation and dialogue exchange between human and robot. For this reason, the system combines symbolic deep learning through a Conditional Convolution Variational Auto-encoder, with an emotion-tagged audio sampler. We then compare this to a SOTA text-to-speech system in our robotic platform, Shimon the marimba player.We conducted a between-groups study with 100 participants watching a musician interact for 30 s with Shimon. 
We were able to increase user ratings for the key creativity metrics; novelty and coherence, while maintaining ratings for expressivity across each implementation. Our results also indicated that by communicating in a form that relates to the robot’s core functionality, we can raise likeability and perceived intelligence, while not altering animacy or anthropomorphism. These findings indicate the variation that can occur in the perception of a robot based on interactions surrounding a performance, such as initial meetings and spaces between pieces, in addition to the core creative algorithms.) <|cite_end|>. Our approach aims to bridge this gap by analyzing these prosodic features, which are fundamental to everyday speech, and exploring how they can be used to classify emotion-driven prosody. Koo et al. have studied speech emotion recognition using a combination of MFCC and prosodic features with a GRU model on the IEMOCAP dataset <|cite_start|> (Reference: 2020 International Conference on Electronics, Information, and Communication (ICEIC): ) <|cite_end|>. We expand upon their work by performing an in-depth analysis of 11 different audio features and their effect on classifying emotion. We also classify emotion beyond spoken language by analyzing prosodic features that better generalize to how humans convey emotion, using the newly collected dataset described in Section 3.2. <|paper_end|>
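For concreteness, here is a minimal sketch of how pitch and loudness contours -- the raw material of musical prosody -- can be extracted and summarized with librosa. The statistics below are an illustrative subset chosen for the sketch, not the exact 11-feature set analyzed in the paper.

\begin{verbatim}
import numpy as np
import librosa

def prosody_features(path):
    """Summarize pitch and loudness contours into a fixed-length vector."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    # Fundamental-frequency contour over a singing-friendly range.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced]                       # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]     # loudness proxy
    return np.array([
        np.nanmean(f0), np.nanstd(f0),    # pitch level and variability
        np.nanmean(np.abs(np.diff(f0))),  # mean pitch movement per frame
        rms.mean(), rms.std(),            # dynamics level and variability
        voiced.mean(),                    # voiced-frame ratio
    ])
\end{verbatim}

A vector of this form can then be fed to any traditional classifier (e.g., a random forest or SVM), mirroring the restricted-input setting studied in the experiments.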
[ "<|reference_start|> Audio-visual feature selection and reduction for emotion classification: Recognition of expressed emotion from speech and facial gestures was investigated in experiments on an audio-visual emotional database. A total of 106 audio and 240 visual features were extracted and then features were selected with Plus l-Take Away r algorithm based on Bhattacharyya distance criterion. In the second step, linear transformation methods, principal component analysis (PCA) and linear discriminant analysis (LDA), were applied to the selected features and Gaussian classifiers were used for classification of emotions. The performance was higher for LDA features compared to PCA features. The visual features performed better than audio features, for both PCA and LDA. Across a range of fusion schemes, the audio-visual feature results were close to that of visual features. A highest recognition rate of 53% was achieved with audio features, 98% with visual features, and 98% with audio-visual features selected by Bhattacharyya distance and transformed by LDA. <|reference_end|>", "<|reference_start|> Emotion Analysis of Songs Based on Lyrical and Audio Features: In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification. <|reference_end|>", "<|reference_start|> Communicating emotion: The role of prosodic features.: <|reference_end|>", "<|reference_start|> Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication: As robotic arms become prevalent in industry it is crucial to improve levels of trust from human collaborators. Low levels of trust in human-robot interaction can reduce overall performance and prevent full robot utilization. We investigated the potential benefits of using emotional musical prosody to allow the robot to respond emotionally to the user's actions. We tested participants' responses to interacting with a virtual robot arm that acted as a decision agent, helping participants select the next number in a sequence. We compared results from three versions of the application in a between-group experiment, where the robot had different emotional reactions to the user's input depending on whether the user agreed with the robot and whether the user's choice was correct. In all versions, the robot reacted with emotional gestures. One version used prosody-based emotional audio phrases selected from our dataset of singer improvisations, the second version used audio consisting of a single pitch randomly assigned to each emotion, and the final version used no audio, only gestures. 
Our results showed no significant difference for the percentage of times users from each group agreed with the robot, and no difference between user's agreement with the robot after it made a mistake. However, participants also took a trust survey following the interaction, and we found that the reported trust ratings of the musical prosody group were significantly higher than both the single-pitch and no audio groups. <|reference_end|>" ]
[ 8, 10, 13, 14 ]
{"<|cite_1|>": "arxiv-281503", "<|cite_2|>": "ss-983569", "<|cite_3|>": "arxiv-243697", "<|cite_4|>": "ss-2029329", "<|cite_5|>": "ss-793389", "<|cite_6|>": "ss-1935641", "<|cite_7|>": "ss-900771", "<|cite_8|>": "ss-701706", "<|cite_9|>": "ss-2029330", "<|cite_10|>": "ss-2310222", "<|cite_11|>": "arxiv-79524", "<|cite_12|>": "ss-1760508", "<|cite_13|>": "ss-687319", "<|cite_14|>": "ss-1053148", "<|multi_cite_15_2|>": "arxiv-290767", "<|multi_cite_15_3|>": "ss-2092187", "<|cite_16|>": "ss-2029331"}
2302.11637
<|paper_start|> Title: Hitting Sets when the Shallow Cell Complexity is Small Abstract: Hitting Sets when the Shallow Cell Complexity is Small: The hitting set problem is a well-known NP-hard optimization problem in which, given a set of elements and a collection of subsets, the goal is to find the smallest selection of elements such that each subset contains at least one element in the selection. Many geometric set systems enjoy improved approximation ratios, which have recently been shown to be tight with respect to the shallow cell complexity of the set system. The algorithms that exploit the cell complexity, however, tend to be involved and computationally intensive. This paper shows that a slightly improved asymptotic approximation ratio for the hitting set problem can be attained using a much simpler algorithm: solve the linear programming relaxation, take one initial random sample from the set of elements with probabilities proportional to the LP-solution, and, while there is an unhit set, take an additional sample from it proportional to the LP-solution. Our algorithm is a simple generalization of the elegant net-finder algorithm by Nabil Mustafa. To analyze this algorithm for the hitting set problem, we generalize the classic Packing Lemma, and the more recent Shallow Packing Lemma, to the setting of weighted epsilon-nets. Introduction \label{sec:intro} The input to the hitting set problem is a finite \emph{set system} -- a ground set $X$ of $\neles$ elements, or \textit{points}, and a collection $\sets$ of $\nsets$ subsets, or \textit{ranges}, of $X$. This can also be understood as a hypergraph, with vertices $X$ and hyper-edges $\sets$. A \emph{hitting set} is a subset of elements $H \subseteq X$ such that every set $R \in \sets$ is hit by $H$, i.e. $R \cap H \neq \emptyset$ for all $R \in \sets$. This is a vertex cover under the hypergraph view. The set system can be encoded as a set-element incidence matrix $A \in \{0, 1\}^{\nsets \times \neles}$, in which the $(i,j)$th entry $a_{ij}$ is $1$ if range $R_i$ contains point $x_j$, and $0$ otherwise. The IP of the minimum hitting set problem is \begin{align} \label{eq:ip_formulation} \min_y \sum_{j: x_j \in X}&y_j \nonumber\\ \textrm{s.t. } \sum_{j: x_j \in X} &a_{ij}y_j \geq 1, && \forall i : R_i \in \sets;\\ & y_j \in \{0, 1\}, && \forall j: x_j \in X,\nonumber \end{align} where variable $y_j \in \{0, 1\}$ indicates whether element $x_j$ is in the solution $H$. Hitting sets and set covers are intimately connected; a hitting set for $A$ is a set cover of $A^T$. Both problems' decision versions are NP-complete <|cite_start|> (Reference: Computers and Intractability: A Guide to the Theory of NP-Completeness) <|cite_end|>. There exists an $\bigO{\log \neles}$-approximation algorithm, and this bound is tight unless P = NP <|cite_start|> (Reference: A {T: A damper clutch in automatic transmission systems has some advantages of fuel economy and dynamic performance.
Although a damper clutch operation improves a fuel economy of the vehicles, a positive operation of a damper clutch in a low vehicle speed induces abnormal vibration. This paper analyzed one of reasons for abnormal vibration by a damper clutch operation in low engine speed ranges. A simulation model was designed to confirm the effects of a damper clutch operation under unstable regions of an engine. A theoretical analysis was carried out about an engine operation stability. Simulation was conducted to depict abnormal vibration by a damper clutch operation in unstable regions of an engine performance curve. The effects of an engine operation region for abnormal vibration by a damper clutch was investigated according to the range and the slope of unstable regions. As a result of simulations, a damper clutch operation would be better to avoid an engine unstable regions.) <|cite_end|> <|cite_start|> (Reference: Approximation Algorithms for Combinatorial Problems: Simple, polynomial-time, heuristic algorithms for finding approximate solutions to various polynomial complete optimization problems are analyzed with respect to their worst case behavior, measured by the ratio of the worst solution value that can be chosen by the algorithm to the optimal value. For certain problems, such as a simple form of the knapsack problem and an optimization problem based on satisfiability testing, there are algorithms for which this ratio is bounded by a constant, independent of the problem size. For a number of set covering problems, simple algorithms yield worst case ratios which can grow with the log of the problem size. And for the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as O(n^ε), where n is the problem size and ε > 0 depends on the algorithm.) <|cite_end|>. However, there are algorithms that exploit additional structure in $A$ to attain improved approximation ratios\footnote{For example, when $A$ has bounded row or column sums <|cite_start|> (Reference: A Linear-Time Approximation Algorithm for the Weighted Vertex Cover Problem: ) <|cite_end|> <|cite_start|> (Reference: A Greedy Heuristic for the Set-Covering Problem: Let A be a binary matrix of size m × n, let cT be a positive row vector of length n and let e be the column vector, all of whose m components are ones. The set-covering problem is to minimize cTx subject to Ax ≥ e and x binary. We compare the value of the objective function at a feasible solution found by a simple greedy heuristic to the true optimum. It turns out that the ratio between the two grows at most logarithmically in the largest column sum of A. When all the components of cT are the same, our result reduces to a theorem established previously by Johnson and Lovasz.) <|cite_end|>.}. Indeed, our work is motivated by the problem of exploiting structure when covering large numbers of wireless LoRaWAN transmitters with wireless receivers. Transmitters can be viewed as points; a transmitter is considered covered if it is in the line of sight of a wireless receiver, since line of sight in turn drives transmission quality in LoRaWAN. The area in the line of sight of a receiver roughly resembles a simple shape. Many geometric set systems enjoy better approximation ratios via \emph{epsilon-nets}, or $\e$-nets.
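To make the formulation concrete, the sketch below solves the LP relaxation of IP (\ref{eq:ip_formulation}) with SciPy. The incidence matrix is a toy example, HiGHS is merely one reasonable solver backend, and the function name is ours.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def hitting_set_lp(A):
    """LP relaxation of the hitting set IP:
    minimize sum(y) s.t. A @ y >= 1 and 0 <= y <= 1."""
    m, n = A.shape
    res = linprog(c=np.ones(n),
                  A_ub=-A, b_ub=-np.ones(m),  # A @ y >= 1 as -A @ y <= -1
                  bounds=[(0, 1)] * n, method="highs")
    return res.x, res.fun  # fractional solution y* and LP value z*

# Toy instance: 3 sets over 4 elements.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
y_star, z_star = hitting_set_lp(A)
\end{verbatim}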
A set system is said to be \emph{geometric} whenever its elements can be encoded as points in Euclidean space, and sets are derived from containment of the points in geometric shapes, such as half-spaces, balls, or rectangles\footnote{Some definitions allow for uncountably many geometric shapes in $\sets$, e.g. all squares. However, because the number of points $X$ is finite, there are nevertheless a finite number of unique sets induced by these shapes.}. The seminal work of Brönnimann and Goodrich <|cite_start|> (Reference: {{{A: An analysis of the dynamic variation characteristics and interrelationships of TN, TP, Chl-a, and the nitrogen-phosphorus ratio in Yangzong Lake from 2002 to 2012, with the lake's eutrophication status evaluated using the comprehensive trophic state index method. The results show that eutrophication of Yangzong Lake is trending upward: in 2007 the lake rose from the oligotrophic to the mesotrophic level, and Chl-a, TN, and TP concentrations have all risen rapidly since 2007. Chl-a concentration is positively correlated with TN and TP concentrations, and the closer the nitrogen-phosphorus ratio comes to the threshold of 16:1, the faster Chl-a concentration rises. If nutrient inputs are not effectively controlled, Yangzong Lake is predicted to reach the eutrophic level around 2017, with a further decline in water quality.) <|cite_end|>, and Even \textit{et al.} <|cite_start|> (Reference: Hitting sets when the VC-dimension is small: ) <|cite_end|>, connects the approximability of a hitting set instance to the size of weighted $\e$-nets. Given non-negative weights on the points, $\mu: X \rightarrow \mathbb{R}_{\geq 0}$, a \emph{weighted} $\e$-net with respect to weights $\mu$ is a subset $H \subseteq X$ that hits all $\e$-heavy sets: \begin{equation} \label{eq:e-net} \forall R \in \sets \textrm{ with } \mu(R) \geq \e \cdot \mu(X): \quad R \cap H \neq \emptyset, \end{equation} where the weight of any subset $S \subseteq X$ is defined as $\mu(S) = \sum_{x \in S}\mu(x)$. Even \textit{et al.} <|cite_start|> (Reference: Hitting sets when the VC-dimension is small: ) <|cite_end|> reduce the problem of finding a small hitting set to finding a small $\e$-net via a reformulation of the linear programming relaxation of the hitting set problem (\ref{eq:ip_formulation}). The reformulated LP (\ref{eq:even_lp_formulation}) is a program for finding the largest $\epsilon$, and corresponding weights $\mu$, subject to the constraint that an $\e$-net with respect to weights $\mu$ is a hitting set. \begin{align} \label{eq:even_lp_formulation} \max_{\e, \mu} \ &\epsilon \nonumber\\ \textrm{s.t. } \sum_{j: x_j \in X} a_{ij}&\mu_j \geq \e, && \forall i: R_i \in \sets;\nonumber\\ \sum_{j: x_j \in X} &\mu_j = 1;\\ &\mu_j \geq 0, && \forall j : x_j \in X.\nonumber \end{align} The first constraint requires that each set $R$ is $\e$-heavy; the second constraint normalizes the weights. Let $(\epsilon^*, \mu^*)$ denote an optimal solution to LP (\ref{eq:even_lp_formulation}), with $\mu^* = (\mu^*_1, \dots, \mu^*_n)$. Let $z^*$ be the optimal value of the LP relaxation of the original program (\ref{eq:ip_formulation}). The first constraint ensures that an $\e^*$-net with respect to weights $\mu^*$ is a hitting set. Moreover, the reciprocal optimal value $1/\e^*$ is equal to the optimal LP value $z^*$ <|cite_start|> (Reference: Hitting sets when the VC-dimension is small: ) <|cite_end|>. In particular, an $\e^*$-net of size $g(1/\e^*)$ for some function $g(\cdot)$ is a hitting set of size $g(z^*)$. Hence, to find a small hitting set it suffices to solve LP (\ref{eq:even_lp_formulation}) and find a small $\e^*$-net with respect to weights $\mu^*$.
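As a rough illustration of the reformulation, the sketch below encodes LP (\ref{eq:even_lp_formulation}) for the same solver by stacking $(\mu, \e)$ into one variable vector; the encoding is ours, not an implementation from the paper.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def epsilon_net_lp(A):
    """LP (3): maximize eps s.t. A @ mu >= eps, sum(mu) = 1, mu >= 0.
    Variables are stacked as v = (mu_1, ..., mu_n, eps)."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                              # maximize eps = minimize -eps
    A_ub = np.hstack([-A, np.ones((m, 1))])   # eps - A @ mu <= 0, per set
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(mu) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m),
                  A_eq=A_eq, b_eq=np.ones(1),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:-1], res.x[-1]              # (mu*, eps*)
\end{verbatim}

On small instances one can verify the stated relationship numerically: the returned $\e^*$ satisfies $1/\e^* = z^*$, and scaling a fractional hitting-set solution to $\mu = y^*/z^*$ yields a feasible point of LP (\ref{eq:even_lp_formulation}) with value $1/z^*$.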
Haussler and Welzl <|cite_start|> (Reference: Epsilon-nets and simplex range queries: We present a new technique for half-space and simplex range query using O(n) space and O(n^a) query time, where a < d(d-1)/(d(d-1) + 1) + γ for all dimensions d ≥ 2 and γ > 0. These bounds are better than those previously published for all d ≥ 2. The technique uses random sampling to build a partition-tree structure. We introduce the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling and give necessary and sufficient conditions that a random sample is an ε-net with high probability. We illustrate the application of these ideas to other range query problems.) <|cite_end|> show that set systems with bounded VC-dimension admit small $\e$-nets, and develop a simple algorithm to find them. The VC-dimension is a measure of the set system's complexity. Given a subset $S \subseteq X$, the \emph{projection} of $\sets$ to $S$ is the set system formed by elements $S$ and sets $\reduce{\sets}{S} = \{R \cap S: R \in \sets\}$. The VC-dimension of $\sets$ is the size of the largest subset $S \subseteq X$ such that $\reduce{\sets}{S}$ \emph{shatters} $S$, i.e. the largest set $S$ such that $\reduce{\sets}{S}$ contains all subsets of $S$. In particular, Clarkson <|cite_start|> (Reference: A randomized algorithm for closest-point queries: An algorithm for closest-point queries is given. The problem is this: given a set S of n points in d-dimensional space, build a data structure so that given an arbitrary query point p, a closest point in S to p can be found quickly. The measure of distance is the Euclidean norm. This is sometimes called the post-office problem. The new data structure will be termed an RPO tree, from Randomized Post Office. The expected time required to build an RPO tree is $O(n^{\lceil d/2 \rceil (1 + \epsilon)})$, for any fixed $\epsilon > 0$, and a query can be answered in $O(\log n)$ worst-case time. An RPO tree requires $O(n^{\lceil d/2 \rceil (1 + \epsilon)})$ space in the worst case. The constant factors in these bounds depend on d and $\epsilon$. The bounds are average-case due to the randomization employed by the algorithm, and hold for any set of input points. This result approaches the $\Omega(n^{\lceil d/2 \rceil})$ worst-case time required for any algorithm that constructs the Voronoi...) <|cite_end|>, and Haussler and Welzl <|cite_start|> (Reference: Epsilon-nets and simplex range queries: We present a new technique for half-space and simplex range query using O(n) space and O(n^a) query time, where a < d(d-1)/(d(d-1) + 1) + γ for all dimensions d ≥ 2 and γ > 0. These bounds are better than those previously published for all d ≥ 2. The technique uses random sampling to build a partition-tree structure.
We introduce the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling and give necessary and sufficient conditions that a random sample is an ε-net with high probability. We illustrate the application of these ideas to other range query problems.) <|cite_end|>, show that any set system with VC-dimension $d$ has a weighted $\e$-net of size $\bigO{\tfrac{d}{\e}\log\tfrac{1}{\e}}$. This is remarkable, as the size is independent of both the size of $X$ and $\sets$. Moreover, the algorithm for finding such an $\e$-net is simple: Select a subset $H \subseteq X$ by sampling each element $x$ in $X$ independently. \begin{theorem}[$\e$-net Theorem <|cite_start|> (Reference: Epsilon-nets and simplex range queries: We present a new technique for half-space and simplex range query using O(n) space and O(n^a) query time, where a < d(d-1)/(d(d-1) + 1) + γ for all dimensions d ≥ 2 and γ > 0. These bounds are better than those previously published for all d ≥ 2. The technique uses random sampling to build a partition-tree structure. We introduce the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling and give necessary and sufficient conditions that a random sample is an ε-net with high probability. We illustrate the application of these ideas to other range query problems.) <|cite_end|> <|cite_start|> (Reference: Almost Tight Bounds for epsilon-Nets: Given any natural number $d$, $d - 2 + \frac{2}{d+2} \leq \lim_{\epsilon \to 0} \frac{f_d(\epsilon)}{(1/\epsilon)\log(1/\epsilon)} \leq d$. Further, we prove that $f_1(\epsilon) = \max(2, 1/\epsilon - 1)$, and similar bounds are established for some special classes of range spaces of Vapnik-Chervonenkis dimension three.) <|cite_end|>] \label{thm:e-net} Let $(X, \sets)$ be a set system with VC-dimension $d$, and let $\mu: X \rightarrow \mathbb{R}_{\geq 0}$ be element weights with $\mu(X) = 1$. Then for any $\e, \gamma \in (0, 1)$: \begin{equation*} H \gets \textrm{ pick each } x \in X \text{ with probability }\min\left\{1, \frac{2\mu(x)}{\e}\cdot\max\left\{\log\tfrac{1}{\gamma}, d\log\tfrac{1}{\e}\right\}\right\} \end{equation*} is a weighted $\e$-net with respect to weights $\mu$ with probability at least $1-\gamma$. \end{theorem} Throughout, we define $\mu(S) = \sum_{x \in S}\mu(x)$ for all subsets $S \subseteq X$. For general set systems of VC-dimension $d$, this bound is tight in expectation <|cite_start|> (Reference: Almost Tight Bounds for epsilon-Nets: Given any natural number $d$, $d - 2 + \frac{2}{d+2} \leq \lim_{\epsilon \to 0} \frac{f_d(\epsilon)}{(1/\epsilon)\log(1/\epsilon)} \leq d$. Further, we prove that $f_1(\epsilon) = \max(2, 1/\epsilon - 1)$, and similar bounds are established for some special classes of range spaces of Vapnik-Chervonenkis dimension three.) <|cite_end|>. However, there are alternative ways to parameterize the complexity of set systems. \subsection{Shallow Cell Complexity} The \textit{shallow cell complexity} (SCC) is a finer parameterization of the complexity of set systems.
<|cite_start|> (Reference: Small-Size $\eps$-Nets for Axis-Parallel Rectangles and Boxes: We show the existence of $\varepsilon$-nets of size $O\left(\frac{1}{\varepsilon}\log\log\frac{1}{\varepsilon}\right)$ for planar point sets and axis-parallel rectangular ranges. The same bound holds for points in the plane and “fat” triangular ranges and for point sets in $\boldsymbol{R}^3$ and axis-parallel boxes; these are the first known nontrivial bounds for these range spaces. Our technique also yields improved bounds on the size of $\varepsilon$-nets in the more general context considered by Clarkson and Varadarajan. For example, we show the existence of $\varepsilon$-nets of size $O\left(\frac{1}{\varepsilon}\log\log\log\frac{1}{\varepsilon}\right)$ for the dual range space of “fat” regions and planar point sets (where the regions are the ground objects and the ranges are subsets stabbed by points). Plugging our bounds into the technique of Bronnimann and Goodrich or of Even, Rawitz, and Shahar, we obtain improved approximation factors (computable in expected polynomial time by a randomized algorithm) for the hitting set or the set cover problems associated with the corresponding range spaces.) <|cite_end|> <|cite_start|> (Reference: Weighted capacitated, priority, and geometric set cover via improved quasi-uniform sampling: The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan's recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan's sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.) <|cite_end|> <|cite_start|> (Reference: Epsilon nets and union complexity: We consider the following combinatorial problem: given a set of n objects (for example, disks in the plane, triangles), and an integer L ≥ 1, what is the size of the smallest subset of these n objects that covers all points that are in at least L of the objects? This is the classic question about the size of an L/n-net for these objects. It is well known that for fairly general classes of geometric objects the size of an L/n-net is O(n/L log n/L).
There are some instances where this general bound can be improved, and this improvement is usually due to bounds on the combinatorial complexity (size) of the boundary of the union of these objects. Thus, the boundary of the union of m disks has size O(m), and this translates to an O(n/L) bound on the size of an L/n-net for disks. For m fat triangles, the size of the union boundary is O(m log log m), and this yields L/n-nets of size O(n/L log log n/L). Improved nets directly translate into an upper bound on the ratio between the optimal integral solution and the optimal fractional solution for the corresponding geometric set cover problem. Thus, for covering k points by disks, this ratio is O(1); and for covering k points by fat triangles, this ratio is O(log log k). This connection to approximation algorithms for geometric set cover is a major motivation for attempting to improve bounds on nets. Our main result is an argument that in some cases yields nets that are smaller than those previously obtained from the size of the union boundary. Thus for fat triangles, for instance, we obtain nets of size O(n/L log log log n). We use this to obtain a randomized polynomial time algorithm that gives an O(log log log k)-approximation for the problem of covering k points by the smallest subset of a given set of triangles.) <|cite_end|>. Readers are referred to Mustafa and Varadarajan <|cite_start|> (Reference: Epsilon-approximations \& epsilon-nets: The use of random samples to approximate properties of geometric configurations has been an influential idea for both combinatorial and algorithmic purposes. This chapter considers two related notions---$\epsilon$-approximations and $\epsilon$-nets---that capture the most important quantitative properties that one would expect from a random sample with respect to an underlying geometric configuration.) <|cite_end|> for more background. A \emph{cell} in a binary matrix $A$ is a collection of identical rows. A cell has \emph{depth} $k$ if the number of $1$'s in any of its rows is exactly $k$, i.e., if each set in the cell contains $k$ elements. For a non-decreasing function $\cells{\cdot, \cdot}$, we say binary matrix $A$ has \emph{shallow cell complexity} (SCC) $\cells{\cdot, \cdot}$ if, for all $1 \leq k \leq l \leq \neles$, the number of cells of depth at most $k$ in any submatrix $A^*$ of $A$ of at most $l$ columns is at most $\cells{l, k}$. A set system $(X, \sets)$ is said to have SCC $\cells{l, k}$ if its set-element incidence matrix $A$ does. Often $\cells{l, k} = \bigO{\cells{l}k^c}$ for some constant $c > 0$ and single-variable function $\cells{\cdot}$, in which case the dependence on $k$ can be dropped and the SCC denoted by $\cells{l}$. Examples of geometric set systems with small shallow cell complexity are discs in the plane with $\cells{l, k} = \bigO{k}$, and axis-parallel rectangles with $\cells{l, k} = \bigO{lk^2}$. As is true for VC-dimension, there are algorithms that find hitting sets or $\e$-nets with sizes bounded in terms of the shallow cell complexity. A prominent example is the quasi-uniform sampling algorithm of Chan \textit{et al.} <|cite_start|> (Reference: Weighted capacitated, priority, and geometric set cover via improved quasi-uniform sampling: The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case.
We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan's recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan's sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.) <|cite_end|>. Given non-negative weights $\mu: X \rightarrow \mathbb{R}_{\geq 0}$, and a value $\epsilon > 0$, the algorithm finds a hitting set while maintaining an upper bound on the probability of selecting any given element. \begin{theorem}[Quasi-uniform sampling <|cite_start|> (Reference: Weighted capacitated, priority, and geometric set cover via improved quasi-uniform sampling: The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan's recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan's sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case.
For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.) <|cite_end|>] Suppose a set system defined by $A$ has SCC $\cells{l, k} = \cells{l}k^c$ for some $c > 0$. Then there is a randomized poly-time algorithm that returns a hitting set of expected size $\bigO{\max\{1, \log(\cells{\neles})\}}$ times the LP optimum. \end{theorem} The algorithm attains the optimal approximation ratio with respect to the SCC\footnote{In addition, it is worth noting that this algorithm can solve the more general \textit{weighted} hitting set problem, in which each element has a given weight, and the goal is to find the minimum weight hitting set.}. However, the sampling procedure is involved, and may require enumeration over all sets $\sets$, of which there can be $\nsets = \Omega(\neles^c)$ for some constant $c > 0$ <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O((d/ε) log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system.) <|cite_end|>. Taking a different approach, Mustafa and colleagues <|cite_start|> (Reference: A Simple Proof of the Shallow Packing Lemma: ) <|cite_end|> <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O((d/ε) log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system.
2012 ACM Subject Classification Theory of computation → Sketching and sampling) <|cite_end|> <|cite_start|> (Reference: A Simple Proof of Optimal Epsilon Nets: ) <|cite_end|> develop a net-finder for asymptotically optimal-sized \emph{un}weighted $\e$-nets with respect to the SCC. The algorithm is remarkably simple: Take an initial sample from $X$, and while there are unhit sets, choose an unhit set arbitrarily, and add $\bigO{1}$ randomly chosen elements from this set to the original sample. The algorithm assumes access to an oracle that returns an unhit set. This oracle is called at most $\bigO{1/\e}$ times in expectation. While the size of the returned $\e$-net is asymptotically on par with the quasi-uniform sampling algorithm, there are large constants in the upper bound <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O(d/ε log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system. 2012 ACM Subject Classification Theory of computation → Sketching and sampling) <|cite_end|>. This algorithm is not directly applicable to the hitting set problem via the LP-reduction above, although it can be used via a standard reduction. The analysis of the algorithm applies only to uniform weights, and the optimal weights $\mu^*$ of the LP-formulation (\ref{eq:even_lp_formulation}) are not generally uniform. Nevertheless, it is possible to reduce the problem of finding a weighted $\e$-net to that of finding a uniform $\e'$-net following a standard reduction, in which an expanded instance is generated by copying each element $x_j \in X$ a number of times roughly proportional to its weight $\mu^*(x_j)$ <|cite_start|> (Reference: {{{A: This study analyzes the dynamic variation characteristics and interrelationships of TN, TP, Chl-a, and the nitrogen-to-phosphorus ratio in Yangzong Lake during 2002-2012, and evaluates the lake's eutrophication status using the comprehensive trophic state index method. The results show that eutrophication of Yangzong Lake has been rising: in 2007 it rose from its earlier oligotrophic level to a mesotrophic level, and Chl-a, TN, and TP concentrations all increased rapidly after 2007. Chl-a concentration is positively correlated with TN and TP concentrations, and the closer the nitrogen-to-phosphorus ratio is to the threshold of 16:1, the faster the Chl-a concentration rises. It is pointed out that if nutrient inputs are not effectively controlled, Yangzong Lake is predicted to reach a eutrophic level around 2017 and water quality will decline further.) <|cite_end|> <|cite_start|> (Reference: Weighted capacitated, priority, and geometric set cover via improved quasi-uniform sampling: The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC).
Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan's recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan's sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.) <|cite_end|>. This can generate $\Omega(\neles)$ copies of each element, which can have notable consequences. First, to achieve a weighted $\e^*$-net in the original instance, one must use a smaller value $\e'$ for the expanded instance, on the order of $\bigO{\e^* / \neles}$. This results in an approximation ratio of $\bigO{\log\cells{\bigO{\neles}}}$. Secondly, generating copies can increase the number of elements from $\neles$ to $\Omega(\neles^2)$. This can increase the runtime considerably. In particular, repeatedly sampling from sets of size $\Theta(\neles^2)$ can become prohibitive on large instances such as the wireless coverage problem motivating our work. \subsection{Our Contributions} This paper generalizes the elegant net-finder algorithm of Mustafa <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O(d/ε log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system. 2012 ACM Subject Classification Theory of computation → Sketching and sampling) <|cite_end|> to the setting of weighted $\e$-nets, in order to produce a fast and simple algorithm for the hitting set problem, which attains asymptotically optimal approximation ratios with respect to the shallow cell complexity.
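To make the net-finder loop described above concrete, the following Python fragment gives a toy sketch (an illustration added here, not an implementation from any of the cited works). The set system is a list of Python sets over a ground list, the unhit-set oracle is a naive linear scan, and the constants \texttt{c0} and \texttt{c1} are placeholders standing in for the $\bigO{1/\e}$-sized initial sample and the $\bigO{1}$ elements added per repair round of the analysis.
\begin{verbatim}
import random

def net_finder(X, ranges, eps, c0=4, c1=2, seed=0):
    """Toy sketch of the net-finder: sample, then repair unhit heavy sets.

    X      : list of hashable elements (the ground set)
    ranges : list of sets over X (the set system)
    eps    : only sets of size >= eps * |X| must be hit
    c0, c1 : placeholder constants for the O(1/eps) initial sample
             size and the O(1) elements added per repair round
    """
    rng = random.Random(seed)
    net = set(rng.sample(X, min(len(X), int(c0 / eps))))
    heavy = [S for S in ranges if len(S) >= eps * len(X)]

    def unhit():  # naive stand-in for the oracle assumed by the algorithm
        return next((S for S in heavy if not (S & net)), None)

    while (S := unhit()) is not None:
        # add O(1) random elements of the unhit set to the sample
        net.update(rng.sample(list(S), min(len(S), c1)))
    return net
\end{verbatim}
For the weighted hitting set pipeline above, one would first solve the LP for $\mu^*$ and then either expand the instance by duplicating elements in proportion to $\mu^*$, or, as in the generalization developed below, bias the sampling by $\mu^*$ directly.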
The algorithm enjoys a faster runtime that makes solving larger instances, such as LoRaWAN receiver placement at scale, feasible. This is achieved by combining the weighted $\e$-net finder with the reduction of Even \textit{et al.} <|cite_start|> (Reference: Hitting sets when the VC-dimension is small: ) <|cite_end|>. In doing so, we also improve on the asymptotic approximation ratio from $\max\{1, \log\cells{\neles}\}$ to $\max\{1, \bigO{\log\cells{\bigO{z^*}}}\}$ where $z^*$ is the optimal value of the linear relaxation of the hitting set program (\ref{eq:ip_formulation}). While in the worst case $z^* = \neles$, it is often the case that $z^* \ll \neles$. However, the multiplicative constants in our analysis are relatively large, matching those of Mustafa <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O(d/ε log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system. 2012 ACM Subject Classification Theory of computation → Sketching and sampling) <|cite_end|>. In addition to the algorithm, our analysis generalizes the classic Packing Lemma of Haussler, as well as the Shallow Packing Lemma of Mustafa \textit{et al.} <|cite_start|> (Reference: A Simple Proof of Optimal Epsilon Nets: ) <|cite_end|>, to the weighted setting, which may be of independent interest. Key to our approach are adaptations of Mustafa's <|cite_start|> (Reference: A Simple Proof of Optimal Epsilon Nets: ) <|cite_end|> Shallow Packing Lemma and Haussler's classic Packing Lemma that accommodate non-uniform weights. Our main technical contribution is to allow a notion of \emph{weighted packings}. Consider any non-negative weights $\mu: X \rightarrow \mathbb{R}_{\geq 0}$ with $\sum_{x \in X}\mu(x) = 1$, and extend it to element subsets via $\mu(S) = \sum_{x \in S}\mu(x)$.\footnote{Any non-negative weights $w:X \rightarrow \mathbb{R}_{\geq 0}$ with $w(X) > 0$ can be normalized as $\mu(x) = w(x)/w(X)$.} A $(k, \delta)$-\emph{packing with respect to weights $\mu$} is a collection of sets $\pack \subseteq \sets$ in which (i) all sets $R$ in $\pack$ are at most $k$-\textit{heavy}, i.e., have bounded weight $\mu(R) \leq k$; and (ii) all pairs of sets have symmetric differences of weight at least $\delta$. (See Definition \ref{def:k-pack}). Our weighted shallow packing lemma upper bounds the number of sets in $\pack$ as a function of the SCC. Our approach accommodates weights $\mu$ by sampling elements from a distribution with probability mass proportional to the weights, rather than from a uniform distribution as in the original proofs.
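To make the weighted sampling step concrete, the following small illustration (ours, under the definitions just given, with \texttt{mu} a dictionary of weights) draws elements i.i.d. with replacement with probability mass proportional to $\mu$, and checks the two conditions of a $(k, \delta)$-packing.
\begin{verbatim}
import numpy as np

def weighted_sample(elements, mu, m, seed=0):
    """Draw m elements i.i.d. with replacement, Pr[x] = mu[x] / mu(X).
    With uniform mu this reduces to the sampling of the unweighted proofs."""
    rng = np.random.default_rng(seed)
    w = np.array([mu[x] for x in elements], dtype=float)
    idx = rng.choice(len(elements), size=m, replace=True, p=w / w.sum())
    return [elements[i] for i in idx]

def mu_of(S, mu):
    """Set weight mu(S) = sum of mu(x) over x in S."""
    return sum(mu[x] for x in S)

def is_k_delta_packing(P, mu, k, delta):
    """Check a (k, delta)-packing w.r.t. mu: (i) every set in the list P
    is at most k-heavy, and (ii) every pair of sets has a symmetric
    difference of weight at least delta."""
    if any(mu_of(R, mu) > k for R in P):
        return False
    return all(mu_of(A ^ B, mu) >= delta
               for i, A in enumerate(P) for B in P[i + 1:])
\end{verbatim}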
Moreover, our proof uses sampling \textit{with replacement} rather than \textit{without replacement} to simplify the analysis. While more generally applicable, our result yields the same bound on the size of $\pack$ as in the unweighted setting. An analogous sampling approach is used in proving \cref{thm:e-net} <|cite_start|> (Reference: Almost Tight Bounds for epsilon-Nets: Given any natural number $d$, $d - 2 + \frac{2}{d+2} \leqslant \lim_{\varepsilon \to 0} \frac{f_d(\varepsilon)}{(1/\varepsilon)\log(1/\varepsilon)} \leqslant d$. Further, we prove that $f_1(\varepsilon) = \max(2, \lceil 1/\varepsilon \rceil - 1)$, and similar bounds are established for some special classes of range spaces of Vapnik-Chervonenkis dimension three.) <|cite_end|>. Equipped with our generalized lemma, it is straightforward to adapt Mustafa's <|cite_start|> (Reference: Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O(d/ε log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system. 2012 ACM Subject Classification Theory of computation → Sketching and sampling) <|cite_end|> analysis to a weighted net-finder. A proof of our Weighted Packing Lemma is included in the extended online version. <|paper_end|>
[ "<|reference_start|> A randomized algorithm for closest-point queries: An algorithm for closest-point queries is given. The problem is this: given a set S of n points in d-dimensional space, build a data structure so that given an arbitrary query point p, a closest point in S to p can be found quickly. The measure of distance is the Euclidean norm. This is sometimes called the post-office problem. The new data structure will be termed an RPO tree, from Randomized Post Office. The expected time required to build an RPO tree is $O(n^{\\lceil {{d / 2}} \\rceil (1 + \\epsilon )} )$, for any fixed $\\epsilon > 0$, and a query can be answered in $O(\\log n)$ worst-case time. An RPO tree requires $O(n^{\\lceil {{d / 2}} \\rceil (1 + \\epsilon )} )$ space in the worst case. The constant factors in these bounds depend on d and $\\epsilon $. The bounds are average-case due to the randomization employed by the algorithm, and hold for any set of input points. This result approaches the $\\Omega (n^{\\lceil {{d / 2}} \\rceil } )$ worst-case time required for any algorithm that constructs the Voronoi... <|reference_end|>", "<|reference_start|> Weighted capacitated, priority, and geometric set cover via improved\nquasi-uniform sampling: The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan's recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. \n \nOur main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan's sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2O(log* n)-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant. <|reference_end|>", "<|reference_start|> Epsilon nets and union complexity: We consider the following combinatorial problem: given a set of n objects (for example, disks in the plane, triangles), and an integer L ≥ 1, what is the size of the smallest subset of these n objects that covers all points that are in at least L of the objects? This is the classic question about the size of an L/n-net for these objects. It is well known that for fairly general classes of geometric objects the size of an L/n-net is O(n/L log n/L). 
There are some instances where this general bound can be improved, and this improvement is usually due to bounds on the combinatorial complexity (size) of the boundary of the union of these objects. Thus, the boundary of the union of m disks has size O(m), and this translates to an O(n/L) bound on the size of an L/n-net for disks. For m fat triangles, the size of the union boundary is O(m log log m), and this yields L/n-nets of size O(n/L log log n/L). Improved nets directly translate into an upper bound on the ratio between the optimal integral solution and the optimal fractional solution for the corresponding geometric set cover problem. Thus, for covering k points by disks, this ratio is O(1); and for covering k points by fat triangles, this ratio is O(log log k). This connection to approximation algorithms for geometric set cover is a major motivation for attempting to improve bounds on nets. Our main result is an argument that in some cases yields nets that are smaller than those previously obtained from the size of the union boundary. Thus for fat triangles, for instance, we obtain nets of size O(n/L log log log n). We use this to obtain a randomized polynomial time algorithm that gives an O(log log log k)-approximation for the problem of covering k points by the smallest subset of a given set of triangles. <|reference_end|>", "<|reference_start|> Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an ε-net for (X,R) of size O(d/ε log(1/ε)). Furthermore, the algorithm is simple: just take a uniform random sample from X! However, for many geometric set systems this bound is sub-optimal and since then, there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an ε-net: start with an initial random sample N. Iteratively, as long as N is not an ε-net for R, pick any unhit set S ∈ R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, ε-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/ε) calls to the Oracle. In particular, this implies that computing optimal-sized ε-nets is as easy as computing an unhit set in the given set system. 2012 ACM Subject Classification Theory of computation → Sketching and sampling <|reference_end|>" ]
[ 10, 16, 17, 25 ]
{"<|cite_1|>": "ss-683080", "<|multi_cite_2_1|>": "ss-946698", "<|multi_cite_2_2|>": "ss-1462109", "<|multi_cite_3_1|>": "ss-1625108", "<|multi_cite_3_2|>": "ss-788097", "<|cite_5|>": "ss-809778", "<|cite_6|>": "ss-998382", "<|cite_7|>": "ss-998382", "<|cite_8|>": "ss-998382", "<|cite_9|>": "ss-681199", "<|cite_10|>": "ss-1373137", "<|cite_11|>": "ss-681199", "<|multi_cite_12_1|>": "ss-681199", "<|multi_cite_12_2|>": "ss-1097607", "<|cite_13|>": "ss-1097607", "<|multi_cite_14_1|>": "ss-929630", "<|multi_cite_14_2|>": "ss-1107285", "<|multi_cite_14_3|>": "ss-2274188", "<|cite_15|>": "ss-1759073", "<|cite_17|>": "ss-1107285", "<|cite_18|>": "ss-1107285", "<|cite_19|>": "ss-1106290", "<|multi_cite_20_1|>": "ss-2564477", "<|multi_cite_20_2|>": "ss-1106290", "<|multi_cite_20_3|>": "ss-1726728", "<|cite_21|>": "ss-1106290", "<|multi_cite_22_1|>": "ss-809778", "<|multi_cite_22_2|>": "ss-1107285", "<|cite_23|>": "ss-1106290", "<|cite_24|>": "ss-998382", "<|cite_25|>": "ss-1106290", "<|cite_27|>": "ss-1726728", "<|cite_28|>": "ss-1726728", "<|cite_30|>": "ss-1097607", "<|cite_31|>": "ss-1106290"}
2012.02179
<|paper_start|> Title: Reconstructing cellular automata rules from observations at nonconsecutive times Abstract: Reconstructing cellular automata rules from observations at nonconsecutive times: Recent experiments by Springer and Kenyon have shown that a deep neural network can be trained to predict the action of $t$ steps of Conway's Game of Life automaton given millions of examples of this action on random initial states. However, training was never completely successful for $t>1$, and even when successful, a reconstruction of the elementary rule ($t=1$) from $t>1$ data is not within the scope of what the neural network can deliver. We describe an alternative network-like method, based on constraint projections, where this is possible. From a single data item this method perfectly reconstructs not just the automaton rule but also the states in the time steps it did not see. For a unique reconstruction, the size of the initial state need only be large enough that it and the $t-1$ states it evolves into contain all possible automaton input patterns. We demonstrate the method on 1D binary cellular automata that take inputs from $n$ adjacent cells. The unknown rules in our experiments are not restricted to simple rules derived from a few linear functions on the inputs (as in Game of Life), but include all $2^{2^n}$ possible rules on $n$ inputs. Our results extend to $n=6$, for which exhaustive rule-search is not feasible. By relaxing translational symmetry in space and also time, our method is attractive as a platform for the learning of binary data, since the discreteness of the variables does not pose the same challenge it does for gradient-based methods. Introduction From a hardware perspective, cellular automata (CA) are a natural model of computation. While too simple as a serious model of the universe itself <|cite_start|> (Reference: A New Kind of Science: Book Review for:"A New Kind of Science", by Stephen Wolfram (Wolfram Media, Inc. Champaign IL 2002).) <|cite_end|>, their dynamics exhibit many of the same qualitative modes of behavior seen in physical systems. CA have translational symmetry and as such are of interest in machine learning, where neural networks with convolutional filters are routinely used to detect spatial patterns, no matter where they occur in an image. With convolutional filters matching in size the input field of an automaton, a network has the capacity to represent the automaton rules. The challenge of training a network to learn the rules was recently taken up by Springer and Kenyon (SK) <|cite_start|> (Reference: It's Hard for Neural Networks To Learn the Game of Life: Efforts to improve the learning abilities of neural networks have focused mostly on the role of optimization methods rather than on weight initializations. Recent findings, however, suggest that neural networks rely on lucky random initial weights of subnetworks called "lottery tickets" that converge quickly to a solution. To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway's Game of Life, the update rules of which can be implemented efficiently in a 2n+1 layer convolutional network. We find that networks of this architecture trained on this task rarely converge. Rather, networks require substantially more parameters to consistently converge. 
In addition, near-minimal architectures are sensitive to tiny changes in parameters: changing the sign of a single weight can cause the network to fail to learn. Finally, we observe a critical value d_0 such that training minimal networks with examples in which cells are alive with probability d_0 dramatically increases the chance of convergence to a solution. We conclude that training convolutional neural networks to learn the input/output function represented by n steps of Game of Life exhibits many characteristics predicted by the lottery ticket hypothesis, namely, that the size of the networks required to learn this function are often significantly larger than the minimal network required to implement the function.) <|cite_end|> with Conway's Game of Life. In this 2D binary-valued automaton, the value of a cell at the next time step is uniquely determined by the value of a linear filter applied to the $3\times 3$ field of inputs, together with the current value of the cell. SK used the training protocol where random patterns are fed into the network inputs, and the network outputs are compared to the $t$-step Game of Life evolution of the input pattern. Using standard gradient-based optimization of the network parameters, such as the $3\times 3$ filters, SK found that the $t$-step Game of Life rule was learned reliably only for $t=1$. Results were mixed for $t>1$, even when (convolutionally) adding many extra parameters as is common practice in machine learning. Because SK did not impose time-translational symmetry on their filters, their network cannot be faulted for not reconstructing the elementary ($t=1$) CA rule, even when it was able to correctly predict $t>1$ applications of the rule. In fact, SK were motivated by a more general question, the \textit{lottery ticket} hypothesis <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> of gradient-based optimization on networks, for which the CA prediction problem is an instructive test case. 
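For concreteness, one step of the rule that SK's networks were asked to learn can be written as exactly such a linear filter followed by a pointwise decision. The snippet below is an illustration added here, unrelated to SK's code; the toroidal boundary is a convenience choice, not something the source specifies.
\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

def life_step(state):
    """One step of Conway's Game of Life on a binary 2D array.
    A 3x3 summing filter plays the role of the linear filter on the
    input field; the update is then a pointwise function of the
    neighbor count and the current cell value."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nbrs = convolve2d(state, kernel, mode="same", boundary="wrap")
    survive = (state == 1) & ((nbrs == 2) | (nbrs == 3))
    born = (state == 0) & (nbrs == 3)
    return (survive | born).astype(int)
\end{verbatim}
Iterating \texttt{life\_step} $t$ times gives the $t$-step target map that the networks were trained to reproduce.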
On the other hand, now that one approach to this problem has been tried, it seems appropriate to consider its difficulty and what methods are available to solve it. The case $t=1$ is trivial for any number of CA inputs $n$: one simply examines states at two consecutive times and constructs the CA rule as a look-up table. For a binary automaton, a random input state having size of order $n 2^n$ will contain all $2^n$ possible patterns to completely define the CA rule. For small enough $n$ the case $t>1$ is trivial as well, since one only has to try all $2^{2^n}$ (binary) CA rules on the input to find one that gives a match to the output when evolved by $t$ steps. Again, a single large random data instance suffices, although now one should expect non-uniqueness, such as when the output state has low entropy (e.g. a uniform state). The CA rule reconstruction problem is therefore interesting for $t>1$ and sufficiently large $n$. Since $2^{2^6}\approx 10^{19}$, $n=6$ is already an interesting case. We present a method for reconstructing CA rules that has several parallels with neural networks. Variables are arranged at the nodes of a layered feed-forward network, with data applied at the input and output layers. ``Training'' is done with a single input-output pair. When successful, the variables on the intervening layers reveal the unseen states of the CA. There are also variables on the network edges, connecting every node not in the input layer with its $n$ inputs in the layer one time step earlier. However, these are not weight parameters, as in standard neural networks, but auxiliary variables used for ``splitting'' the reconstruction problem into constraints among independent sets of variables. The actual network parameters in our method are the unknown $2^n$ bits of the CA rule. An important point of departure from standard practice is that the parameters are not optimized by minimizing a loss. Instead, the parameter-bits along with the states in the unseen layers are recovered from the fixed-point of an iterative feasibility solver. This alternative approach <|cite_start|> (Reference: Learning Without Loss: We explore a new approach for training neural networks where all loss functions are replaced by hard constraints.
Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an autoencoder and classifier, all applications and their implementations by projections are described in complete detail. Although the new approach has the potential to extend the scope of neural networks (e.g. by defining activation not through functions but constraint sets), most of the featured models are standard to allow comparison with stochastic gradient descent.) <|cite_end|> has been demonstrated for the training of standard network models and seems especially well suited for the CA rule reconstruction problem. After defining the network variables for a general CA in section \ref{sec2}, we show in section \ref{sec3} that the constraints they must satisfy can be partitioned into two sets such that the corresponding projections --- to satisfy the constraints with least change --- are easy, local computations. In section \ref{sec4} we briefly review the general purpose RRR algorithm we will use for finding feasible points, that is, points that satisfy both sets of constraints. The method is first applied, in section \ref{sec5}, to $n=3$ automata in one dimension, featuring Wolfram's Rules <|cite_start|> (Reference: Statistical mechanics of cellular automata: Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of ''elementary'' cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states, or generate self-similar patterns with fractal dimensions approximately 1.59 or 1.69. With ''random'' initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed.) <|cite_end|> 30 and 110 as examples of chaotic and Turing-complete CAs. Although a reconstruction algorithm is not needed for $n=3$, we find that the new method appears to find rules without exploring $2^{2^3}$ possibilities. To demonstrate the method in a setting where we know of no practical alternatives, we turn to a CA with $n=6$. Finally, in section \ref{sec6} we describe how the same scheme might be used in a new model for unsupervised learning called Boolean generative networks, where the task is to discover how strings of bits are generated from fewer uncorrelated bits. <|paper_end|>
[ "<|reference_start|> A New Kind of Science: Book Review for:\"A New Kind of Science\", by Stephen Wolfram (Wolfram Media, Inc. Champaign IL 2002). <|reference_end|>", "<|reference_start|> It's Hard for Neural Networks To Learn the Game of Life: Efforts to improve the learning abilities of neural networks have focused mostly on the role of optimization methods rather than on weight initializations. Recent findings, however, suggest that neural networks rely on lucky random initial weights of subnetworks called \"lottery tickets\" that converge quickly to a solution. To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway's Game of Life, the update rules of which can be implemented efficiently in a 2n+1 layer convolutional network. We find that networks of this architecture trained on this task rarely converge. Rather, networks require substantially more parameters to consistently converge. In addition, near-minimal architectures are sensitive to tiny changes in parameters: changing the sign of a single weight can cause the network to fail to learn. Finally, we observe a critical value d_0 such that training minimal networks with examples in which cells are alive with probability d_0 dramatically increases the chance of convergence to a solution. We conclude that training convolutional neural networks to learn the input/output function represented by n steps of Game of Life exhibits many characteristics predicted by the lottery ticket hypothesis, namely, that the size of the networks required to learn this function are often significantly larger than the minimal network required to implement the function. <|reference_end|>", "<|reference_start|> The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the \"lottery ticket hypothesis:\" dense, randomly-initialized, feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy. <|reference_end|>", "<|reference_start|> Learning Without Loss: We explore a new approach for training neural networks where all loss functions are replaced by hard constraints. 
The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.). Instead of taking gradient steps, the optimizer in the constraint based approach, called relaxed-reflect-reflect (RRR), derives its steps from projections to local constraints. In neural networks one such projection makes the minimal modification to the inputs $x$, the associated weights $w$, and the pre-activation value $y$ at each neuron, to satisfy the equation $x\\cdot w=y$. These projections, along with a host of other local projections (constraining pre- and post-activations, etc.) can be partitioned into two sets such that all the projections in each set can be applied concurrently, across the network and across all data in the training batch. This partitioning into two sets is analogous to the situation in phase retrieval and the setting for which the general purpose RRR optimizer was designed. Owing to the novelty of the method, this paper also serves as a self-contained tutorial. Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an autoencoder and classifier, all applications and their implementations by projections are described in complete detail. Although the new approach has the potential to extend the scope of neural networks (e.g. by defining activation not through functions but constraint sets), most of the featured models are standard to allow comparison with stochastic gradient descent. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-1269825", "<|cite_2|>": "arxiv-287944", "<|cite_3|>": "arxiv-151068", "<|cite_4|>": "arxiv-232051", "<|cite_5|>": "ss-1282829"}
2103.09518
<|paper_start|> Title: Sliceable Monolith: Monolith First, Microservices Later Abstract: Sliceable Monolith: Monolith First, Microservices Later: We propose Sliceable Monolith, a new methodology for developing microservice architectures and performing their integration testing by leveraging most of the simplicity of a monolith: a single codebase and a local execution environment that simulates distribution. Then, a tool compiles a codebase for each microservice and a cloud deployment configuration. The key enabler of our approach is the technology-agnostic service definition language offered by Jolie. Introduction \label{sec:introduction} Microservices represent a prominent software paradigm for building distributed applications that strive for scalability, maintainability, and tight development and deployment cycles <|cite_start|> (Reference: Microservices: yesterday, today, and tomorrow: Microservices is an architectural style inspired by service-oriented computing that has recently started gaining popularity. Before presenting the current state-of-the-art in the field, this chapter reviews the history of software architecture, the reasons that led to the diffusion of objects and services first, and microservices later. Finally, open problems and future challenges are introduced. This survey primarily addresses newcomers to the discipline, while offering an academic viewpoint on the topic. In addition, we investigate some practical issues and point out some potential solutions.) <|cite_end|>. Microservices enforce strong boundaries and interact by message passing, leading to modular and independently executable software components. However, they require dealing with multiple codebases (one per microservice), making prototyping and testing more challenging compared to a monolith---a standard application that consists of a single executable. The complexity introduced by microservices can easily outweigh their benefits and, especially when it comes to greenfield project development, experts have mixed opinions on whether to start with microservices or with a monolith <|cite_start|> (Reference: Sliceable Monolith: Monolith First, Microservices Later: We propose Sliceable Monolith, a new methodology for developing microservice architectures and performing their integration testing by leveraging most of the simplicity of a monolith: a single codebase and a local execution environment that simulates distribution. Then, a tool compiles a codebase for each microservice and a cloud deployment configuration. The key enabler of our approach is the technology-agnostic service definition language offered by Jolie.) <|cite_end|>. Thus we ask: Can we recover some of the simplicity of monoliths in the development of microservices? A positive answer would contribute to making the greenfield development of microservice systems more approachable, which is important because migrating monoliths to microservices is difficult. In this article, we propose a new development methodology whereby an entire microservice architecture has a single codebase. Thus, our approach drastically reduces the complexity of reaching a working prototype to iterate on. We depict our methodology in \cref{fig:methodology} and outline it in the following. The main artifact in the codebase is a ``sliceable monolith'': the definition of a microservice system that looks like a monolith, but where all components are enforced to be services with clear boundaries and data models (e.g., the structures of Data Transfer Objects).
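Jolie code is the actual vehicle for this (introduced next); as a rough, language-agnostic analogue, the Python toy below sketches the idea only---it is not Jolie and not the paper's tooling. Two components live in one codebase but interact solely through an explicit, DTO-typed interface, which is the property a slicer can rely on to cut along service boundaries.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class GreetRequest:
    """DTO: the data model is part of the service boundary."""
    name: str

class Greeter:
    """A would-be microservice; its only entry point is greet()."""
    def greet(self, req: GreetRequest) -> str:
        return f"Hello, {req.name}!"

class Frontend:
    """A second component that talks to Greeter only through its
    interface: locally a method call, after slicing a remote message."""
    def __init__(self, greeter: Greeter):
        self.greeter = greeter

    def handle(self, name: str) -> str:
        return self.greeter.greet(GreetRequest(name))

if __name__ == "__main__":
    # Runs as a single "monolith" process during prototyping.
    print(Frontend(Greeter()).handle("world"))
\end{verbatim}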
We achieve these features by using the Jolie programming language <|cite_start|> (Reference: Service-Oriented Programming with Jolie: ) <|cite_end|>. Jolie linguistically enforces some best practices for microservice development, e.g., interaction among components happens necessarily through formally-defined service interfaces. Thanks to the built-in facilities of the Jolie interpreter, the application can then be tested locally straight away, enabling fast refinement cycles of the prototype. The structure of a sliceable monolith makes it possible to automatically extract the implementation of each microservice into its own codebase. We implement this procedure with an automatic \emph{slicer} tool (called Jolie Slicer), emphasising the fact that the sliceable monolith is cut along the sharp boundaries of the microservices. Our slicer tool also produces the necessary configuration for the containerisation and distributed deployment of the microservice system on the cloud. At this point, developers are free to choose between iterating on the sliceable monolith codebase and developing some (even all) of the microservices independently. The technology-agnostic nature of Jolie interfaces makes it possible to mix different languages for the implementation of each microservice (Jolie currently supports its own behavioural language, Java, and JavaScript, with a plug-in architecture for adding more <|cite_start|> (Reference: Service-Oriented Programming with Jolie: ) <|cite_end|>). <|paper_end|>
[ "<|reference_start|> Microservices: yesterday, today, and tomorrow: Microservices is an architectural style inspired by service-oriented computing that has recently started gaining popularity. Before presenting the current state-of-the-art in the field, this chapter reviews the history of software architecture, the reasons that led to the diffusion of objects and services first, and microservices later. Finally, open problems and future challenges are introduced. This survey primarily addresses newcomers to the discipline, while offering an academic viewpoint on the topic. In addition, we investigate some practical issues and point out some potential solutions. <|reference_end|>", "<|reference_start|> Sliceable Monolith: Monolith First, Microservices Later: We propose Sliceable Monolith, a new methodology for developing microservice architectures and perform their integration testing by leveraging most of the simplicity of a monolith: a single codebase and a local execution environment that simulates distribution. Then, a tool compiles a codebase for each microservice and a cloud deployment configuration. The key enabler of our approach is the technology-agnostic service definition language offered by Jolie. <|reference_end|>", "<|reference_start|> Service-Oriented Programming with Jolie: <|reference_end|>", "<|reference_start|> Service-Oriented Programming with Jolie: <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "arxiv-99993", "<|multi_cite_2_1|>": "ss-1259204", "<|cite_4|>": "ss-1259205", "<|cite_5|>": "ss-1259205"}
2204.13451
<|paper_start|> Title: Cumulative Stay-time Representation for Electronic Health Records in Medical Event Time Prediction Abstract: Cumulative Stay-time Representation for Electronic Health Records in Medical Event Time Prediction: We address the problem of predicting when a disease will develop, i.e., medical event time (MET), from a patient's electronic health record (EHR). The MET of non-communicable diseases like diabetes is highly correlated to cumulative health conditions, more specifically, how much time the patient spent with specific health conditions in the past. The common time-series representation is indirect in extracting such information from EHR because it focuses on detailed dependencies between values in successive observations, not cumulative information. We propose a novel data representation for EHR called cumulative stay-time representation (CTR), which directly models such cumulative health conditions. We derive a trainable construction of CTR based on neural networks that has the flexibility to fit the target data and scalability to handle high-dimensional EHR. Numerical experiments using synthetic and real-world datasets demonstrate that CTR alone achieves a high prediction performance, and it enhances the performance of existing models when combined with them. Introduction Predicting medical events, such as disease progression, from \emph{electronic health records} (EHR) is an important task in medical and healthcare applications <|cite_start|> (Reference: {DATA-GRU: Dual-Attention Time-Aware Gated Recurrent Unit for Irregular Multivariate Time Series: Due to the discrepancy of diseases and symptoms, patients usually visit hospitals irregularly and different physiological variables are examined at each visit, producing large amounts of irregular multivariate time series (IMTS) data with missing values and varying intervals. Existing methods process IMTS into regular data so that standard machine learning models can be employed. However, time intervals are usually determined by the status of patients, while missing values are caused by changes in symptoms. Therefore, we propose a novel end-to-end Dual-Attention Time-Aware Gated Recurrent Unit (DATA-GRU) for IMTS to predict the mortality risk of patients. In particular, DATA-GRU is able to: 1) preserve the informative varying intervals by introducing a time-aware structure to directly adjust the influence of the previous status in coordination with the elapsed time, and 2) tackle missing values by proposing a novel dual-attention structure to jointly consider data-quality and medical-knowledge. A novel unreliability-aware attention mechanism is designed to handle the diversity in the reliability of different data, while a new symptom-aware attention mechanism is proposed to extract medical reasons from original clinical records. Extensive experimental results on two real-world datasets demonstrate that DATA-GRU can significantly outperform state-of-the-art methods and provide meaningful clinical interpretation.) <|cite_end|>. The EHR represents a patient's health history. 
Such prediction can assist in providing detailed health guidance, e.g., for early disease detection, intervention, and the allocation of limited resources in healthcare organizations <|cite_start|> (Reference: Increasing tendency of urine protein is a risk factor for rapid EGFR decline in patients with CKD: A machine learning-based prediction model by using a big database: Artificial intelligence is increasingly being adopted in medical fields to predict various outcomes. In particular, chronic kidney disease (CKD) is problematic because it often progresses to end-stage kidney disease. However, the trajectories of kidney function depend on individual patients. In this study, we propose a machine learning-based model to predict the rapid decline in kidney function among CKD patients by using a big hospital database constructed from the information of 118,584 patients derived from the electronic medical records system. The database included the estimated glomerular filtration rate (eGFR) of each patient, recorded at least twice over a period of 90 days. The data of 19,894 patients (16.8%) were observed to satisfy the CKD criteria. We characterized the rapid decline of kidney function by a decline of 30% or more in the eGFR within a period of two years and classified the available patients into two groups—those exhibiting rapid eGFR decline and those exhibiting non-rapid eGFR decline. Following this, we constructed predictive models based on two machine learning algorithms. Longitudinal laboratory data including urine protein, blood pressure, and hemoglobin were used as covariates. We used longitudinal statistics with a baseline corresponding to 90-, 180-, and 360-day windows prior to the baseline point. The longitudinal statistics included the exponentially smoothed average (ESA), where the weight was defined to be 0.9^(t/b), where t denotes the number of days prior to the baseline point and b denotes the decay parameter. In this study, b was taken to be 7 (7-day ESA). We used logistic regression (LR) and random forest (RF) algorithms based on Python code with scikit-learn library (https://scikit-learn.org/) for model creation. The areas under the curve for LR and RF were 0.71 and 0.73, respectively. The 7-day ESA of urine protein ranked within the first two places in terms of importance according to both models. Further, other features related to urine protein were likely to rank higher than the rest. The LR and RF models revealed that the degree of urine protein, especially if it exhibited an increasing tendency, served as a prominent risk factor associated with rapid eGFR decline.) <|cite_end|>. This paper addresses a scenario in which we predict \emph{when} a patient will develop some disease after an index date, i.e., the \emph{medical event time}~(MET), from past observations in EHR, as shown in Fig.~\ref{FigProblem} <|cite_start|> (Reference: Early prediction of diabetes complications from electronic health records: A multi-task survival analysis approach: Type 2 diabetes mellitus (T2DM) is a chronic disease that usually results in multiple complications. Early identification of individuals at risk for complications after being diagnosed with T2DM is of significant clinical value. In this paper, we present a new data-driven predictive approach to predict when a patient will develop complications after the initial T2DM diagnosis.
We propose a novel survival analysis method to model the time-to-event of T2DM complications designed to simultaneously achieve two important metrics: 1) accurate prediction of event times, and 2) good ranking of the relative risks of two patients. Moreover, to better capture the correlations of time-to-events of the multiple complications, we further develop a multi-task version of the survival model. To assess the performance of these approaches, we perform extensive experiments on patient level data extracted from a large electronic health record claims database. The results show that our new proposed survival analysis approach consistently outperforms traditional survival models and demonstrate the effectiveness of the multi-task framework over modeling each complication independently.) <|cite_end|>. This is a common task in survival analysis and time-to-event analysis, and we focus on MET, not just its occurrence. The past observations for each patient come from a window that spans the initial observation time to the index date and contain lab test results at each time, as shown in the LHS in Fig.~\ref{FigOrdinary}. From accumulated EHR datasets, we learn a prediction model for MET. \begin{figure}[t] \centering \includegraphics[width=60mm]{problem.pdf} \caption{We predict when patient will develop disease after index date from EHR in observation window.} \label{FigProblem} \end{figure} A patient's cumulative health conditions appearing in past observations in EHR are of help for MET prediction. They can be interpreted as the \emph{cumulative stay-time} in specific health states---more specifically, how much time a patient has spent with different health conditions. For example, when a patient has high blood pressure, hyperglycemia, or high body fat for a long enough period, diseases can develop <|cite_start|> (Reference: 2014 evidence-based guideline for the management of high blood pressure in adults: Report from the panel members appointed to the eighth joint national committee (jnc 8): Hypertension is the most common condition seen in primary care and leads to myocardial infarction, stroke, renal failure, and death if not detected early and treated appropriately. Patients want to be assured that blood pressure (BP) treatment will reduce their disease burden, while clinicians want guidance on hypertension management using the best scientific evidence. This report takes a rigorous, evidence-based approach to recommend treatment thresholds, goals, and medications in the management of hypertension in adults. Evidence was drawn from randomized controlled trials, which represent the gold standard for determining efficacy and effectiveness. Evidence quality and recommendations were graded based on their effect on important outcomes. There is strong evidence to support treating hypertensive persons aged 60 years or older to a BP goal of less than 150/90 mm Hg and hypertensive persons 30 through 59 years of age to a diastolic goal of less than 90 mm Hg; however, there is insufficient evidence in hypertensive persons younger than 60 years for a systolic goal, or in those younger than 30 years for a diastolic goal, so the panel recommends a BP of less than 140/90 mm Hg for those groups based on expert opinion. The same thresholds and goals are recommended for hypertensive adults with diabetes or nondiabetic chronic kidney disease (CKD) as for the general hypertensive population younger than 60 years. 
There is moderate evidence to support initiating drug treatment with an angiotensin-converting enzyme inhibitor, angiotensin receptor blocker, calcium channel blocker, or thiazide-type diuretic in the nonblack hypertensive population, including those with diabetes. In the black hypertensive population, including those with diabetes, a calcium channel blocker or thiazide-type diuretic is recommended as initial therapy. There is moderate evidence to support initial or add-on antihypertensive therapy with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker in persons with CKD to improve kidney outcomes. Although this guideline provides evidence-based recommendations for the management of high BP and should meet the clinical needs of most patients, these recommendations are not a substitute for clinical judgment, and decisions about care must carefully consider and incorporate the clinical characteristics and circumstances of each individual patient.) <|cite_end|> <|cite_start|> (Reference: 2. Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes-2021.: The American Diabetes Association (ADA) "Standards of Medical Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee (https://doi.org/10.2337/dc21-SPPC), are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations, please refer to the Standards of Care Introduction (https://doi.org/10.2337/dc21-SINT). Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC.) <|cite_end|>. In particular, for non-communicable diseases, like diabetes, the cumulative stay-time is strongly related to their progression and MET. To utilize information in EHR, the common approach is to formalize the raw observations in EHR into an ordinary time-series representation <|cite_start|> (Reference: {Attain: Attention-based time-aware LSTM networks for disease progression modeling: Modeling patient disease progression using Electronic Health Records (EHRs) is critical to assist clinical decision making. Long-Short Term Memory (LSTM) is an effective model to handle sequential data, such as EHRs, but it encounters two major limitations when applied to EHRs: it is unable to interpret the prediction results and it ignores the irregular time intervals between consecutive events. To tackle these limitations, we propose an attention-based time-aware LSTM Networks (ATTAIN), to improve the interpretability of LSTM and to identify the critical previous events for current diagnosis by modeling the inherent time irregularity. We validate ATTAIN on modeling the progression of an extremely challenging disease, septic shock, by using real-world EHRs. Our results demonstrate that the proposed framework outperforms the state-of-the-art models such as RETAIN and T-LSTM. Also, the generated interpretative time-aware attention weights shed some lights on the progression behaviors of septic shock.)
<|cite_end|> <|cite_start|> (Reference: {Latent Ordinary Differential Equations for Irregularly-Sampled Time Series: Time series with non-uniform intervals occur in many applications, and are difficult to model using standard recurrent neural networks (RNNs). We generalize RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a model we call ODE-RNNs. Furthermore, we use ODE-RNNs to replace the recognition network of the recently-proposed Latent ODE model. Both ODE-RNNs and Latent ODEs can naturally handle arbitrary time gaps between observations, and can explicitly model the probability of observation times using Poisson processes. We show experimentally that these ODE-based models outperform their RNN-based counterparts on irregularly-sampled data.) <|cite_end|> <|cite_start|> (Reference: Neural Pharmacodynamic State Space Modeling: Modeling the time-series of high-dimensional, longitudinal data is important for predicting patient disease progression. However, existing neural network based approaches that learn representations of patient state, while very flexible, are susceptible to overfitting. We propose a deep generative model that makes use of a novel attention-based neural architecture inspired by the physics of how treatments affect disease state. The result is a scalable and accurate model of high-dimensional patient biomarkers as they vary over time. Our proposed model yields significant improvements in generalization and, on real-world clinical data, provides interpretable insights into the dynamics of cancer progression.) <|cite_end|>. In this approach, at each time, we record the value of each lab test result, as shown in the table in Fig.~\ref{FigOrdinary}. The focus is on the detailed dependencies between values in successive observations. When we handle the cumulative stay-time with this representation, prediction models, such as recurrent neural networks (RNNs) <|cite_start|> (Reference: {Attain: Attention-based time-aware LSTM networks for disease progression modeling: Modeling patient disease progression using Electronic Health Records (EHRs) is critical to assist clinical decision making. Long-Short Term Memory (LSTM) is an effective model to handle sequential data, such as EHRs, but it encounters two major limitations when applied to EHRs: it is unable to interpret the prediction results and it ignores the irregular time intervals between consecutive events. To tackle these limitations, we propose an attention-based time-aware LSTM Networks (ATTAIN), to improve the interpretability of LSTM and to identify the critical previous events for current diagnosis by modeling the inherent time irregularity. We validate ATTAIN on modeling the progression of an extremely challenging disease, septic shock, by using real-world EHRs. Our results demonstrate that the proposed framework outperforms the state-of-the-art models such as RETAIN and T-LSTM. Also, the generated interpretative time-aware attention weights shed some lights on the progression behaviors of septic shock.) <|cite_end|>, need to encode the values of an \emph{entire time series} into the cumulative stay-time. This makes modeling the cumulative stay-time indirect. We therefore propose directly extracting the cumulative stay-time from the raw observations in EHR as a novel representation for EHR: the \emph{cumulative stay-time representation} (CTR).
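To make the idea concrete before the formal development, here is one way to write the quantity down in our own illustrative notation (an assumption of ours, not the paper's formal definition): given observation times $\tau_1 < \dots < \tau_T$ in the observation window, observed lab-test vectors $x_{\tau_t}$, and a candidate health state $s$ (a region of lab-value space), the cumulative stay-time of $s$ is
\begin{equation*}
\mathrm{CTR}(s) = \sum_{t=1}^{T-1} \left(\tau_{t+1} - \tau_t\right) \mathbf{1}\left[x_{\tau_t} \in s\right],
\end{equation*}
under the simplifying assumption that a patient's state is piecewise constant between consecutive observations.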
In contrast to the time-series representation, we record the cumulative stay-time at each combination of values of lab test results that represents a state, as shown in Fig.~\ref{FigCumTime}. This explicitly represents how long a patient stays in a specific health state. Representations for modeling the cumulative stay-time in specific states and using it in prediction have been proposed in domains other than EHR modeling, such as for the usage history of batteries <|cite_start|> (Reference: Predicting battery life from usage trajectory patterns: This paper addresses the task of predicting the battery capacity degradation ratio for a given usage pattern. This is an interesting pattern recognition task, where each usage pattern is represented as a trajectory in a feature space, and the prediction model captures the previous usage trajectory patterns. The main technical challenge here is how to build a good model from a limited number of training samples. To tackle this, we introduce a new smoothing technique in the trajectory space. The trajectory smoothing technique is shown to be equivalent of a novel regularization scheme for linear regression. Using real Li-ion battery data, we show that our approach outperforms existing methods.) <|cite_end|> and GPS trajectories <|cite_start|> (Reference: Trajectory topic modelling to characterize driving behaviors with GPS-based trajectory data: The rapid accumulation of large-scale driving data represents an opportunity to improve our understanding of driving behavior patterns and driver traveling intentions. However, limited efforts have been devoted to understanding these patterns and the travel intentions behind them. This study proposes a new trajectory topic model (TTM) to explore latent driving patterns from driving trajectory data and to qualitatively analyze drivers’ main traveling intentions.) <|cite_end|>. However, they are defined only with discrete state modeling, which can be seen as bins of non-overlapping value ranges for lab test results, as shown in the table in Fig.~\ref{FigCumTime}. As such, they focus on low-dimensional observations, such as one, two, or three dimensions, and cannot handle more than several dimensions. This is because the number of states grows exponentially with the dimension of the observation variables under this state definition. Since observations in EHR have many more dimensions, it is difficult to use these approaches on EHR directly. This paper addresses the above difficulties by deriving methods for constructing CTR with enough scalability to handle EHR. We first formally derive a general construction of CTR by using the discrete state.
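As a rough illustration of the construction just described, the sketch below builds a discrete-state CTR by binning each lab dimension and accumulating inter-observation gaps, followed by a kernel-smoothed variant whose size is linear in the number of anchor states rather than exponential in the dimension (anticipating the continuous-state variants introduced next). All names and modeling choices here are ours and purely illustrative, not the paper's implementation.

```python
import numpy as np

def discrete_ctr(times, values, bin_edges):
    """Discrete-state CTR: map each observation (times[t], values[t]) to a
    tuple of per-dimension bin indices and accumulate the time until the
    next observation, assuming the state is constant in between."""
    ctr = {}
    for t in range(len(times) - 1):
        state = tuple(int(np.digitize(values[t, j], bin_edges[j]))
                      for j in range(values.shape[1]))
        ctr[state] = ctr.get(state, 0.0) + (times[t + 1] - times[t])
    return ctr  # dict: state tuple -> cumulative stay-time

def kernel_ctr(times, values, anchors, bandwidth=1.0):
    """Soft CTR: replace hard bins with Gaussian-kernel similarities to a
    fixed set of anchor states, so the representation has one entry per
    anchor instead of one per bin combination."""
    sq_dist = np.sum((values[:, None, :] - anchors[None, :, :]) ** 2, axis=-1)
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)            # soft state membership
    dt = np.diff(times)                          # stay-time per interval
    return (w[:-1] * dt[:, None]).sum(axis=0)    # one value per anchor
```

For example, with `times = np.array([0.0, 30.0, 90.0])` and a single lab value that crosses a bin boundary after the first interval, `discrete_ctr` attributes 30 time units to the first state and 60 to the second.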
This formalization leads to further enhancements of CTR with states defined over continuous measurements, CTR-K and CTR-N, which have states based on kernel functions and neural networks, respectively. They are more practical variants that avoid exponential increases in the number of states and lead to smooth interpolation between states. In addition, CTR-N can be learned from data, which enables flexible state modeling. \paragraph{Contributions.} Our main contributions are the following: \begin{itemize} \setlength{\itemsep}{0.01cm} \item We propose a novel representation for EHR for MET prediction, CTR, which represents how long a patient stays in a specific health state. This helps to model the cumulative health conditions of patients. \item We derive a trainable construction of CTR based on neural networks that adapts to data flexibly and scales to high-dimensional EHR. \item Extensive experiments on multiple MET prediction tasks with synthetic and real-world datasets show the effectiveness of CTR, especially for EHR with relatively longer observation periods, where cumulative health conditions are more crucial for MET. CTR is modular enough to further improve the prediction performance when combined with other models. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=80mm]{ordinary.pdf} \caption{Ordinary time-series representation.} \label{FigOrdinary} \end{figure} \begin{figure}[t] \centering \includegraphics[width=84mm]{cum_time.pdf} \caption{Cumulative stay-time representation.} \label{FigCumTime} \end{figure} <|paper_end|>
[ "<|reference_start|> 2. Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes-2021.: The American Diabetes Association (ADA) \"Standards of Medical Care in Diabetes\" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee (https://doi.org/10.2337/dc21-SPPC), are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations, please refer to the Standards of Care Introduction (https://doi.org/10.2337/dc21-SINT). Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. <|reference_end|>", "<|reference_start|> {Attain: Attention-based time-aware LSTM networks for disease progression modeling: Modeling patient disease progression using Electronic Health Records (EHRs) is critical to assist clinical decision making. Long-Short Term Memory (LSTM) is an effective model to handle sequential data, such as EHRs, but it encounters two major limitations when applied to EHRs: it is unable to interpret the prediction results and it ignores the irregular time intervals between consecutive events. To tackle these limitations, we propose an attention-based time-aware LSTM Networks (ATTAIN), to improve the interpretability of LSTM and to identify the critical previous events for current diagnosis by modeling the inherent time irregularity. We validate ATTAIN on modeling the progression of an extremely challenging disease, septic shock, by using real-world EHRs. Our results demonstrate that the proposed framework outperforms the state-of-the-art models such as RETAIN and T-LSTM. Also, the generated interpretative time-aware attention weights shed some lights on the progression behaviors of septic shock. <|reference_end|>", "<|reference_start|> Predicting battery life from usage trajectory patterns: This paper addresses the task of predicting the battery capacity degradation ratio for a given usage pattern. This is an interesting pattern recognition task, where each usage pattern is represented as a trajectory in a feature space, and the prediction model captures the previous usage trajectory patterns. The main technical challenge here is how to build a good model from a limited number of training samples. To tackle this, we introduce a new smoothing technique in the trajectory space. The trajectory smoothing technique is shown to be equivalent of a novel regularization scheme for linear regression. Using real Li-ion battery data, we show that our approach outperforms existing methods. <|reference_end|>", "<|reference_start|> Trajectory topic modelling to characterize driving behaviors with GPS-based trajectory data: The rapid accumulation of large-scale driving data represents an opportunity to improve our understanding of driving behavior patterns and driver traveling intentions. However, limited efforts have been devoted to understanding these patterns and the travel intentions behind them. This study proposes a new trajectory topic model (TTM) to explore latent driving patterns from driving trajectory data and to qualitatively analyze drivers’ main traveling intentions. 
These trajectory data were collected from more than 150,000 commercial vehicles in Fujian Province, China. After data preprocessing, the TTM was then established to decompose trajectory data into various topics with corresponding probabilities, which were correlated to drivers’ preferences. Several experiments conducted in Fuzhou City were performed to evaluate the feasibility and efficiency of the TTM using a real trajectory dataset. The results show that the TTM could effectively mine users’ driving behavior patterns with topic probability. The model would enable us to understand the context in which drivers travel and learn their individual preferences. It is also beneficial in that it can predict drivers’ behaviors, analyze traffic patterns in an entire city, and even help autonomous vehicles to learn from drivers. <|reference_end|>" ]
[ 4, 8, 9, 10 ]
{"<|cite_1|>": "ss-897971", "<|cite_2|>": "ss-1237182", "<|cite_3|>": "ss-1237183", "<|multi_cite_4_1|>": "ss-804025", "<|multi_cite_4_2|>": "ss-1237184", "<|multi_cite_5_1|>": "ss-882047", "<|multi_cite_5_2|>": "ss-1096362", "<|multi_cite_5_3|>": "arxiv-322918", "<|cite_6|>": "ss-882047", "<|cite_7|>": "ss-1237185", "<|cite_8|>": "ss-1237186"}
<|cite_start|> (Reference: Adapting to Context-Aware Knowledge in Natural Conversation for Multi-Turn Response Selection: Virtual assistants aim to build a human-like conversational agent. However, current human-machine conversations still cannot make users feel intelligent enough to build a continued dialog over time. Some responses from agents are usually inconsistent, uninformative, less-engaging and even memoryless. In recent years, most researchers have tried to employ conversation context and external knowledge, e.g. wiki pages and knowledge graphs, into the model which only focuses on solving some special conversation problems in local perspectives. Few researchers are dedicated to the whole capability of the conversational agent which is endowed with abilities of not only passively reacting the conversation but also proactively leading the conversation. In this paper, we first explore the essence of conversations among humans by analyzing real dialog records. We find that some conversations revolve around the same context and topic, and some require additional information or even move on to a new topic. Base on that, we conclude three conversation modes shown in Figure 1 and try to solve how to adapt to them for a continuous conversation. To this end, we define “Adaptive Knowledge-Grounded Conversations” (AKGCs) where the knowledge is to ground the conversation within a multi-turn context by adapting to three modes. To achieve AKGC, a model called MNDB is proposed to model natural dialog behaviors for multi-turn response selection. To ensure a consistent response, MNDB constructs a multi-turn context flow. Then, to mimic user behaviors of incorporating knowledge in natural conversations, we design a ternary-grounding network along with the context flow. In this network, to gain the ability to adapt to diversified conversation modes, we exploit multi-view semantical relations among response candidates, context and knowledge. Thus, three adaptive matching signals are extracted for final response selection. Evaluation results on two benchmarks indicate that MNDB can significantly outperform state-of-the-art models.) <|cite_end|>. Early work on KG-based CRSs directly generates recommendations based on the connected neighbours of the items the user mentions in the KG during the conversation <|cite_start|> (Reference: Towards Knowledge-Based Recommender Dialog System: In this paper, we propose a novel end-to-end framework called KBRD, which stands for Knowledge-Based Recommender Dialog System. It integrates the recommender system and the dialog generation system. The dialog system can enhance the performance of the recommendation system by introducing knowledge-grounded information about users' preferences, and the recommender system can improve that of the dialog generation system by providing recommendation-aware vocabulary bias. Experimental results demonstrate that our proposed model has significant advantages over the baselines in both the evaluation of dialog generation and recommendation. A series of analyses show that the two systems can bring mutual benefits to each other, and the introduced knowledge contributes to both their performances.) <|cite_end|> <|cite_start|> (Reference: Towards Deep Conversational Recommendations: There has been growing interest in using neural networks and deep learning techniques to create dialogue systems.
Conversational recommendation is an interesting setting for the scientific exploration of dialogue with natural language as the associated discourse involves goal-driven dialogue that often transforms naturally into more free-form chat. This paper provides two contributions. First, until now there has been no publicly available large-scale dataset consisting of real-world dialogues centered around recommendations. To address this issue and to facilitate our exploration here, we have collected ReDial, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations. We make this data available to the community for further research. Second, we use this dataset to explore multiple facets of conversational recommendations. In particular we explore new neural architectures, mechanisms, and methods suitable for composing conversational recommendation systems. Our dataset allows us to systematically probe model sub-components addressing different parts of the overall problem domain ranging from: sentiment analysis and cold-start recommendation generation to detailed aspects of how natural language is used in this setting in the real world. We combine such sub-components into a full-blown dialogue system and examine its behavior.) <|cite_end|>. They can provide users with recommendations and persuasive reasons in a single conversation turn. <|cite_start|> (Reference: Interactive Path Reasoning on Graph for Conversational Recommendation: Traditional recommendation systems estimate user preference on items from past interaction history, thus suffering from the limitations of obtaining fine-grained and dynamic user preference. Conversational recommendation system (CRS) brings revolutions to those limitations by enabling the system to directly ask users about their preferred attributes on items. However, existing CRS methods do not make full use of such advantage -- they only use the attribute feedback in rather implicit ways such as updating the latent user representation. In this paper, we propose Conversational Path Reasoning (CPR), a generic framework that models conversational recommendation as an interactive path reasoning problem on a graph. It walks through the attribute vertices by following user feedback, utilizing the user preferred attributes in an explicit way. By leveraging on the graph structure, CPR is able to prune off many irrelevant candidate attributes, leading to better chance of hitting user preferred attributes. To demonstrate how CPR works, we propose a simple yet effective instantiation named SCPR (Simple CPR). We perform empirical studies on the multi-round conversational recommendation scenario, the most realistic CRS setting so far that considers multiple rounds of asking attributes and recommending items. Through extensive experiments on two datasets Yelp and LastFM, we validate the effectiveness of our SCPR, which significantly outperforms the state-of-the-art CRS methods EAR (arXiv:2002.09102) and CRM (arXiv:1806.03277). In particular, we find that the more attributes there are, the more advantages our method can achieve.) <|cite_end|> applies the KG-based method to a multi-round conversation scenario. It utilizes the structural information of the KG to query the user with yes/no questions about their explicit preference for a specific attribute value (i.e., an entity). Through a series of questions and answers between the user and the dialogue agent, the CRS finds a recommendation path in the KG for the final recommendation.
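To convey the flavor of this question-and-prune loop, here is a toy sketch over a bipartite item-attribute view of a KG. The data format, the greedy question-selection rule, and the `ask_user` oracle are all illustrative assumptions of ours; CPR's actual policy scores attributes and walks the graph more carefully.

```python
def cpr_sketch(item_attrs, seed_attr, ask_user, max_turns=5):
    """item_attrs: dict mapping item -> set of attribute entities.
    Starting from an attribute the user mentioned, repeatedly ask a
    yes/no question about one attribute and prune the candidate items."""
    asked = {seed_attr}
    candidates = {i for i, attrs in item_attrs.items() if seed_attr in attrs}
    for _ in range(max_turns):
        if len(candidates) <= 1:
            break
        pool = set().union(*(item_attrs[i] for i in candidates)) - asked
        if not pool:
            break
        # Greedy choice: ask about the attribute shared by most candidates.
        attr = max(pool, key=lambda a: sum(a in item_attrs[i]
                                           for i in candidates))
        asked.add(attr)
        if ask_user(attr):   # affirmative: extend the path, keep matches
            candidates = {i for i in candidates if attr in item_attrs[i]}
        else:                # negative: prune items with that attribute
            candidates = {i for i in candidates if attr not in item_attrs[i]}
    return candidates

# Simulated session: the user actually likes movie "m1".
catalog = {"m1": {"sci-fi", "nolan"}, "m2": {"sci-fi", "90s"}, "m3": {"drama"}}
target = catalog["m1"]
print(cpr_sketch(catalog, "sci-fi", lambda a: a in target))  # {'m1'}
```

Each answered question shrinks the candidate set, which also makes concrete why densely connected entities can force many turns before a single recommendation remains.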
Even though affirmative answers from the user can help a CRS find a promising path in the KG for recommendation, entities in a KG may have multiple relations with other nodes, so the user may need to answer numerous questions before reaching the correct recommendation. This defect of existing KG-based recommenders significantly weakens recommendation efficiency and degrades the user experience. \vspace{-0.3cm} <|paper_end|>
[ "<|reference_start|> Adapting to Context-Aware Knowledge in Natural Conversation for Multi-Turn Response Selection: Virtual assistants aim to build a human-like conversational agent. However, current human-machine conversations still cannot make users feel intelligent enough to build a continued dialog over time. Some responses from agents are usually inconsistent, uninformative, less-engaging and even memoryless. In recent years, most researchers have tried to employ conversation context and external knowledge, e.g. wiki pages and knowledge graphs, into the model which only focuses on solving some special conversation problems in local perspectives. Few researchers are dedicated to the whole capability of the conversational agent which is endowed with abilities of not only passively reacting the conversation but also proactively leading the conversation. In this paper, we first explore the essence of conversations among humans by analyzing real dialog records. We find that some conversations revolve around the same context and topic, and some require additional information or even move on to a new topic. Base on that, we conclude three conversation modes shown in Figure 1 and try to solve how to adapt to them for a continuous conversation. To this end, we define “Adaptive Knowledge-Grounded Conversations” (AKGCs) where the knowledge is to ground the conversation within a multi-turn context by adapting to three modes. To achieve AKGC, a model called MNDB is proposed to model natural dialog behaviors for multi-turn response selection. To ensure a consistent response, MNDB constructs a multi-turn context flow. Then, to mimic user behaviors of incorporating knowledge in natural conversations, we design a ternary-grounding network along with the context flow. In this network, to gain the ability to adapt to diversified conversation modes, we exploit multi-view semantical relations among response candidates, context and knowledge. Thus, three adaptive matching signals are extracted for final response selection. Evaluation results on two benchmarks indicate that MNDB can significantly outperform state-of-the-art models. <|reference_end|>", "<|reference_start|> Towards Knowledge-Based Recommender Dialog System: In this paper, we propose a novel end-to-end framework called KBRD, which stands for Knowledge-Based Recommender Dialog System. It integrates the recommender system and the dialog generation system. The dialog system can enhance the performance of the recommendation system by introducing knowledge-grounded information about users' preferences, and the recommender system can improve that of the dialog generation system by providing recommendation-aware vocabulary bias. Experimental results demonstrate that our proposed model has significant advantages over the baselines in both the evaluation of dialog generation and recommendation. A series of analyses show that the two systems can bring mutual benefits to each other, and the introduced knowledge contributes to both their performances. <|reference_end|>", "<|reference_start|> Towards Deep Conversational Recommendations: There has been growing interest in using neural networks and deep learning techniques to create dialogue systems. Conversational recommendation is an interesting setting for the scientific exploration of dialogue with natural language as the associated discourse involves goal-driven dialogue that often transforms naturally into more free-form chat. This paper provides two contributions. 
First, until now there has been no publicly available large-scale dataset consisting of real-world dialogues centered around recommendations. To address this issue and to facilitate our exploration here, we have collected ReDial, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations. We make this data available to the community for further research. Second, we use this dataset to explore multiple facets of conversational recommendations. In particular we explore new neural architectures, mechanisms, and methods suitable for composing conversational recommendation systems. Our dataset allows us to systematically probe model sub-components addressing different parts of the overall problem domain ranging from: sentiment analysis and cold-start recommendation generation to detailed aspects of how natural language is used in this setting in the real world. We combine such sub-components into a full-blown dialogue system and examine its behavior. <|reference_end|>", "<|reference_start|> Interactive Path Reasoning on Graph for Conversational Recommendation: Traditional recommendation systems estimate user preference on items from past interaction history, thus suffering from the limitations of obtaining fine-grained and dynamic user preference. Conversational recommendation system (CRS) brings revolutions to those limitations by enabling the system to directly ask users about their preferred attributes on items. However, existing CRS methods do not make full use of such advantage -- they only use the attribute feedback in rather implicit ways such as updating the latent user representation. In this paper, we propose Conversational Path Reasoning (CPR), a generic framework that models conversational recommendation as an interactive path reasoning problem on a graph. It walks through the attribute vertices by following user feedback, utilizing the user preferred attributes in an explicit way. By leveraging on the graph structure, CPR is able to prune off many irrelevant candidate attributes, leading to better chance of hitting user preferred attributes. To demonstrate how CPR works, we propose a simple yet effective instantiation named SCPR (Simple CPR). We perform empirical studies on the multi-round conversational recommendation scenario, the most realistic CRS setting so far that considers multiple rounds of asking attributes and recommending items. Through extensive experiments on two datasets Yelp and LastFM, we validate the effectiveness of our SCPR, which significantly outperforms the state-of-the-art CRS methods EAR (arXiv:2002.09102) and CRM (arXiv:1806.03277). In particular, we find that the more attributes there are, the more advantages our method can achieve. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-1400180", "<|cite_2|>": "arxiv-249575", "<|cite_3|>": "arxiv-221953", "<|multi_cite_4_1|>": "arxiv-249575", "<|multi_cite_4_2|>": "arxiv-264266", "<|multi_cite_5_1|>": "arxiv-161813", "<|multi_cite_5_2|>": "ss-1523144", "<|multi_cite_5_4|>": "ss-846950", "<|multi_cite_5_5|>": "arxiv-221953", "<|multi_cite_6_1|>": "arxiv-275563", "<|multi_cite_6_2|>": "arxiv-268254", "<|multi_cite_6_3|>": "arxiv-185003", "<|multi_cite_6_4|>": "arxiv-218786", "<|multi_cite_7_1|>": "arxiv-204968", "<|multi_cite_7_2|>": "ss-1222118", "<|multi_cite_7_3|>": "ss-1855751", "<|multi_cite_7_4|>": "ss-781664", "<|multi_cite_7_5|>": "ss-720996", "<|multi_cite_7_6|>": "ss-1209614", "<|multi_cite_7_7|>": "ss-683721", "<|cite_8|>": "arxiv-204887", "<|cite_9|>": "arxiv-191797", "<|multi_cite_10_1|>": "ss-983139", "<|multi_cite_10_2|>": "ss-1231957", "<|multi_cite_10_3|>": "arxiv-209433", "<|multi_cite_11_1|>": "ss-1400181", "<|multi_cite_11_2|>": "arxiv-151018", "<|multi_cite_11_3|>": "arxiv-204887", "<|multi_cite_12_1|>": "ss-718163", "<|multi_cite_12_2|>": "ss-1121645", "<|multi_cite_12_3|>": "arxiv-191797", "<|multi_cite_12_4|>": "ss-1373464", "<|multi_cite_12_5|>": "ss-911001", "<|multi_cite_13_1|>": "ss-1231957", "<|multi_cite_13_2|>": "arxiv-204968", "<|cite_14|>": "ss-983139", "<|multi_cite_15_1|>": "arxiv-151018", "<|multi_cite_15_2|>": "ss-1400181", "<|cite_16|>": "arxiv-204887", "<|cite_17|>": "ss-1390464", "<|cite_18|>": "arxiv-275563", "<|cite_19|>": "arxiv-161813", "<|multi_cite_20_1|>": "arxiv-161813", "<|multi_cite_20_2|>": "ss-1523144", "<|multi_cite_20_4|>": "ss-846950", "<|multi_cite_20_5|>": "arxiv-221953", "<|multi_cite_20_6|>": "arxiv-266449", "<|multi_cite_21_1|>": "arxiv-275563", "<|multi_cite_21_2|>": "arxiv-268254", "<|multi_cite_21_3|>": "arxiv-185003", "<|multi_cite_21_4|>": "arxiv-218786", "<|multi_cite_21_5|>": "ss-1400182", "<|multi_cite_22_1|>": "arxiv-218786", "<|multi_cite_22_2|>": "arxiv-185003", "<|cite_23|>": "arxiv-275563"}
<|paper_start|> Title: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis Abstract: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis: In this paper we develop a new framework that captures the common landscape underlying common non-convex low-rank matrix problems, including matrix sensing, matrix completion and robust PCA. In particular, we show for all the above problems (including asymmetric cases): 1) all local minima are also globally optimal; 2) no high-order saddle points exist. These results explain why simple algorithms such as stochastic gradient descent converge globally and efficiently optimize these non-convex objective functions in practice. Our framework connects and simplifies the existing analyses of optimization landscapes for matrix sensing and symmetric matrix completion. The framework naturally leads to new results for asymmetric matrix completion and robust PCA. Introduction Non-convex optimization is one of the most powerful tools in machine learning. Many popular approaches, from traditional ones such as matrix factorization <|cite_start|> (Reference: Analysis of a complex of statistical variables into principal components.: ) <|cite_end|> to modern deep learning <|cite_start|> (Reference: {Learning Deep Architectures for AI: Theoretical results strongly suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one needs deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult optimization task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This paper discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.) <|cite_end|>, rely on optimizing non-convex functions. In practice, these functions are optimized using simple algorithms such as alternating minimization or gradient descent. Why such simple algorithms work is still a mystery for many important problems. One way to understand the success of non-convex optimization is to study the optimization landscape: for the objective function, where are the possible locations of global optima, local optima and saddle points. Recently, a line of work showed that several natural problems including tensor decomposition <|cite_start|> (Reference: Escaping from saddle points---Online stochastic gradient for tensor decomposition: We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations.
To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee.) <|cite_end|>, dictionary learning <|cite_start|> (Reference: Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture: We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb{R}^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse. This recovery problem is central to theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under suitable probability model for $\mathbf X_0$. In contrast, prior results based on efficient algorithms either only guarantee recovery when $\mathbf X_0$ has $O(\sqrt{n})$ zeros per column, or require multiple rounds of SDP relaxation to work when $\mathbf X_0$ has $O(n^{1-\delta})$ nonzeros per column (for any constant $\delta \in (0, 1)$). } Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint. In this paper, we provide a geometric characterization of the objective landscape. In particular, we show that the problem is highly structured: with high probability, (1) there are no "spurious" local minimizers; and (2) around all saddle points the objective has a negative directional curvature. This distinctive structure makes the problem amenable to efficient optimization algorithms. In a companion paper (arXiv:1511.04777), we design a second-order trust-region algorithm over the sphere that provably converges to a local minimizer from arbitrary initializations, despite the presence of saddle points.) <|cite_end|>, matrix sensing <|cite_start|> (Reference: Global Optimality of Local Search for Low Rank Matrix Recovery: We show that there are no spurious local minima in the non-convex factorized parametrization of low-rank matrix recovery from incoherent linear measurements. With noisy measurements we show all local minima are very close to a global optimum. Together with a curvature bound at saddle points, this yields a polynomial time global convergence guarantee for stochastic gradient descent {\em from random initialization}.) <|cite_end|> <|cite_start|> (Reference: Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach: We consider the non-square matrix sensing problem, under restricted isometry property (RIP) assumptions. We focus on the non-convex formulation, where any rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ is represented as $UV^\top$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. 
In this paper, we complement recent findings on the non-convex geometry of the analogous PSD setting [5], and show that matrix factorization does not introduce any spurious local minima, under RIP.) <|cite_end|> and matrix completion <|cite_start|> (Reference: Matrix Completion has No Spurious Local Minimum: Matrix completion is a basic machine learning problem that has wide applications, especially in collaborative filtering and recommender systems. Simple non-convex optimization algorithms are popular and effective in practice. Despite recent progress in proving various non-convex algorithms converge from a good initial point, it remains unclear why random or arbitrary initialization suffices in practice. We prove that the commonly used non-convex objective function for \textit{positive semidefinite} matrix completion has no spurious local minima --- all local minima must also be global. Therefore, many popular optimization algorithms such as (stochastic) gradient descent can provably solve positive semidefinite matrix completion with \textit{arbitrary} initialization in polynomial time. The result can be generalized to the setting when the observed entries contain noise. We believe that our main proof strategy can be useful for understanding geometric properties of other statistical problems involving partial or noisy observations.) <|cite_end|> have well-behaved optimization landscapes: all local optima are also globally optimal. Combined with recent algorithms (e.g. <|cite_start|> (Reference: Escaping from saddle points---Online stochastic gradient for tensor decomposition: We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee.) <|cite_end|> <|cite_start|> (Reference: Accelerated Methods for Non-Convex Optimization: We present an accelerated gradient method for non-convex optimization problems with Lipschitz continuous first and second derivatives. The method requires time $O(\epsilon^{-7/4} \log(1/ \epsilon) )$ to find an $\epsilon$-stationary point, meaning a point $x$ such that $\|\nabla f(x)\| \le \epsilon$. The method improves upon the $O(\epsilon^{-2} )$ complexity of gradient descent and provides the additional second-order guarantee that $\nabla^2 f(x) \succeq -O(\epsilon^{1/2})I$ for the computed $x$. Furthermore, our method is Hessian free, i.e. it only requires gradient computations, and is therefore suitable for large scale applications.)
<|cite_end|> <|cite_start|> (Reference: Finding approximate local minima for nonconvex optimization in linear time: We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which is linear in the input representation. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning.) <|cite_end|> <|cite_start|> (Reference: How to Escape Saddle Points Efficiently: This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.) <|cite_end|>) that are guaranteed to find a local minimum for many non-convex functions, such problems can be efficiently solved by basic optimization algorithms such as stochastic gradient descent. In this paper we focus on optimization problems that look for low rank matrices using partial or corrupted observations. Such problems are studied extensively <|cite_start|> (Reference: Hankel Matrix Rank Minimization with Applications to System Identification and Realization: We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods of multipliers, proximal point algorithms, and gradient projection methods. We perform computational experiments to compare these methods on system identification problems and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) and the proximal point algorithm usually outperform other first-order methods in terms of CPU time on both real and simulated data, for small and large regularization parameters, respectively, while for the system realization problem, the alternating direction method of multipliers, as applied to a certain primal reformulation, usuall...) <|cite_end|> <|cite_start|> (Reference: Fast Maximum Margin Matrix Factorization for Collaborative Prediction: Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. 
MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.) <|cite_end|> <|cite_start|> (Reference: Exact Matrix Completion via Convex Optimization: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.) <|cite_end|> and has many applications in recommendation systems <|cite_start|> (Reference: The bellkor solution to the netflix grand prize: This article describes part of our contribution to the “BellKor’s Pragmatic Chaos” final solution, which won the Netflix Grand Prize. The other portion of the contribution was created while working at AT&T with Robert Bell and Chris Volinsky, as reported in our 2008 Progress Prize report [3]. The final solution includes all the predictors described there. In this article we describe only the newer predictors. So what is new over last year’s solution? First we further improved the baseline predictors (Sec. III). This in turn improves our other models, which incorporate those predictors, like the matrix factorization model (Sec. IV). In addition, an extension of the neighborhood model that addresses temporal dynamics was introduced (Sec. V). On the Restricted Boltzmann Machines (RBM) front, we use a new RBM model with superior accuracy by conditioning the visible units (Sec. VI). The final addition is the introduction of a new blending algorithm, which is based on gradient boosted decision trees (GBDT) (Sec. VII).) <|cite_end|>; see the survey by <|cite_start|> (Reference: An overview of low-rank matrix recovery from incomplete observations: Low-rank matrices play a fundamental role in modeling and computational methods for signal processing and machine learning. In many applications where low-rank matrices arise, these matrices cannot be fully sampled or directly observed, and one encounters the problem of recovering the matrix given only incomplete and indirect observations. This paper provides an overview of modern techniques for exploiting low-rank structure to perform matrix recovery in these settings, providing a survey of recent advances in this rapidly-developing field.
Specific attention is paid to the algorithms most commonly used in practice, the existing theoretical guarantees for these algorithms, and representative practical applications of these techniques.) <|cite_end|>. These optimization problems can be formalized as follows: \begin{align} \min_{\M\in \R^{d_1\times d_2}} &\quad f(\M), \label{eq:convexobj}\\ s.t. & \quad \mbox{rank}(\M) = r. \nonumber \end{align} Here $\M$ is a $d_1\times d_2$ matrix and $f$ is a convex function of $\M$. The non-convexity of this problem stems from the low rank constraint. Several interesting problems, such as matrix sensing <|cite_start|> (Reference: Guaranteed Minimum-Rank Solutions of Linear Matrix equations via Nuclear Norm Minimization: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.) <|cite_end|>, matrix completion <|cite_start|> (Reference: Exact Matrix Completion via Convex Optimization: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.) <|cite_end|> and robust PCA <|cite_start|> (Reference: Robust Principal Component Analysis?: This paper is about a curious phenomenon.
Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.) <|cite_end|> can all be framed as optimization problems of this form (see Section~\ref{sec:problems}). In practice, the <|cite_start|> (Reference: A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization: ) <|cite_end|> heuristic is often used---replace $\M$ with an explicit low rank representation $\M = \U\V^\top$, where $\U\in \R^{d_1\times r}$ and $\V\in\R^{d_2\times r}$. The new optimization problem becomes \begin{equation} \min_{\U\in \R^{d_1\times r},\V\in\R^{d_2\times r}} f(\U\V^\top) + Q(\U,\V). \label{eq:asymmetricobj} \end{equation} Here $Q(\U,\V)$ is an (optional) regularizer. Despite the objective being non-convex, for all the problems mentioned above, simple iterative updates from a random or even arbitrary initial point find the optimal solution in practice. It is then natural to ask: {\bf Can we characterize the similarities between the optimization landscapes of these problems?} We show this is indeed possible: \begin{theorem}[informal] The objective functions of matrix sensing, matrix completion and robust PCA have similar optimization landscapes. In particular, for all these problems, 1) all local minima are also globally optimal; 2) any saddle point has at least one strictly negative eigenvalue in its Hessian. \end{theorem} More precise theorem statements appear in Section~\ref{sec:problems}. Note that there were several cases (matrix sensing <|cite_start|> (Reference: Global Optimality of Local Search for Low Rank Matrix Recovery: We show that there are no spurious local minima in the non-convex factorized parametrization of low-rank matrix recovery from incoherent linear measurements. With noisy measurements we show all local minima are very close to a global optimum. Together with a curvature bound at saddle points, this yields a polynomial time global convergence guarantee for stochastic gradient descent {\em from random initialization}.) <|cite_end|> <|cite_start|> (Reference: Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach: We consider the non-square matrix sensing problem, under restricted isometry property (RIP) assumptions. We focus on the non-convex formulation, where any rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ is represented as $UV^\top$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$.
In this paper, we complement recent findings on the non-convex geometry of the analogous PSD setting [5], and show that matrix factorization does not introduce any spurious local minima, under RIP.) <|cite_end|>, symmetric matrix completion <|cite_start|> (Reference: Matrix Completion has No Spurious Local Minimum: Matrix completion is a basic machine learning problem that has wide applications, especially in collaborative filtering and recommender systems. Simple non-convex optimization algorithms are popular and effective in practice. Despite recent progress in proving various non-convex algorithms converge from a good initial point, it remains unclear why random or arbitrary initialization suffices in practice. We prove that the commonly used non-convex objective function for \textit{positive semidefinite} matrix completion has no spurious local minima --- all local minima must also be global. Therefore, many popular optimization algorithms such as (stochastic) gradient descent can provably solve positive semidefinite matrix completion with \textit{arbitrary} initialization in polynomial time. The result can be generalized to the setting when the observed entries contain noise. We believe that our main proof strategy can be useful for understanding geometric properties of other statistical problems involving partial or noisy observations.) <|cite_end|>) where similar results on the optimization landscape were known. However, the techniques in previous works are tailored to the specific problems and hard to generalize. Our framework captures and simplifies all these previous results, and also gives new results on asymmetric matrix completion and robust PCA. The key observation in our analysis is that for matrix sensing, matrix completion, and robust PCA (when fixing the sparse estimate), the function $f$ (in Equation \eqref{eq:convexobj}) is a quadratic function over the matrix $\M$. Hence the Hessian $\H$ of $f$ with respect to $\M$ is a constant. More importantly, the Hessian $\H$ in all the above problems has similar properties (that it approximately preserves norm, similar to the RIP properties used in matrix sensing <|cite_start|> (Reference: Guaranteed Minimum-Rank Solutions of Linear Matrix equations via Nuclear Norm Minimization: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization.
We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.) <|cite_end|>), which allows their optimization landscapes to be characterized in a unified way. Specifically, our framework gives a principled way of defining a {\em direction of improvement} for all points that are not globally optimal. Another crucial property of our framework is the interaction between the regularizer and the Hessian $\H$. Intuitively, the regularizer makes sure the solution is in a nice region $\mathcal{B}$ (e.g., the set of incoherent matrices for matrix completion), and only within $\mathcal{B}$ does the Hessian have the norm-preserving property. On the other hand, the regularizer should not be so large that it severely distorts the landscape. This interaction is crucial for matrix completion, and is also very useful in handling noise and perturbations. In Section~\ref{sec:symmetric}, we discuss ideas required to apply this framework to matrix sensing, matrix completion and robust PCA. Using this framework, we also give a way to {\em reduce} asymmetric matrix problems to symmetric PSD problems (where the desired matrix is of the form $\U\U^\top$). See Section~\ref{sec:asymmetric} for more details. In addition to the results of no spurious local minima, our framework also implies that any saddle point has at least one strictly negative eigenvalue in its Hessian. Formally, we prove that all the above problems satisfy a robust version of this claim---the strict saddle property (see Definition~\ref{def:strict_saddle})---which is one of the crucial sufficient conditions for admitting efficient optimization algorithms, and thus implies the following corollary (see Section \ref{sec:runtime} for more details). \begin{corollary}[informal]\label{cor:runtime} For matrix sensing, matrix completion and robust PCA, simple local search algorithms can find the desired low rank matrix $\U\V^\top = \M^\star$ from an arbitrary starting point in polynomial time with high probability. \end{corollary} For simplicity, we present most results in the noiseless setting, but our results can also be generalized to handle noise. As an example, we show how to do this for matrix sensing in Section~\ref{sec:noise}. \subsection{Related Works} The landscape of low rank matrix problems has recently received a lot of attention. <|cite_start|> (Reference: Matrix Completion has No Spurious Local Minimum: Matrix completion is a basic machine learning problem that has wide applications, especially in collaborative filtering and recommender systems. Simple non-convex optimization algorithms are popular and effective in practice. Despite recent progress in proving various non-convex algorithms converge from a good initial point, it remains unclear why random or arbitrary initialization suffices in practice. We prove that the commonly used non-convex objective function for \textit{positive semidefinite} matrix completion has no spurious local minima --- all local minima must also be global. Therefore, many popular optimization algorithms such as (stochastic) gradient descent can provably solve positive semidefinite matrix completion with \textit{arbitrary} initialization in polynomial time. The result can be generalized to the setting when the observed entries contain noise. We believe that our main proof strategy can be useful for understanding geometric properties of other statistical problems involving partial or noisy observations.) <|cite_end|> showed symmetric matrix completion has no spurious local minimum.
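Before surveying further related work, it may help to see the phenomenon these landscape results describe in miniature. The following sketch runs plain gradient descent from random initialization on a factorized matrix sensing objective of the form \eqref{eq:asymmetricobj}, with the common balancing regularizer $Q(\U,\V) = \frac{1}{4}\|\U^\top\U - \V^\top\V\|_F^2$. The problem sizes, step size, and iteration count are illustrative choices of ours, not experiments from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r, m = 20, 15, 2, 600          # illustrative sizes

# Ground-truth rank-r matrix and a Gaussian measurement operator,
# which is norm-preserving (RIP-like) with high probability.
M_star = rng.normal(size=(d1, r)) @ rng.normal(size=(r, d2))
A = rng.normal(size=(m, d1, d2)) / np.sqrt(m)
b = np.einsum("kij,ij->k", A, M_star)

def grads(U, V):
    """Gradients of 0.5*||A(U V^T) - b||^2 + 0.25*||U^T U - V^T V||_F^2."""
    resid = np.einsum("kij,ij->k", A, U @ V.T) - b
    G = np.einsum("k,kij->ij", resid, A)   # gradient w.r.t. the matrix
    D = U.T @ U - V.T @ V                  # balancing-regularizer term
    return G @ V + U @ D, G.T @ U - V @ D

U = rng.normal(size=(d1, r))
V = rng.normal(size=(d2, r))
for _ in range(5000):                      # plain gradient descent
    gU, gV = grads(U, V)
    U -= 0.02 * gU
    V -= 0.02 * gV

# Relative recovery error: small despite non-convexity and random init.
print(np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star))
```

Consistent with Corollary~\ref{cor:runtime}, such runs typically recover $\M^\star$ to high accuracy without any careful initialization.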
At the same time, <|cite_start|> (Reference: Global Optimality of Local Search for Low Rank Matrix Recovery: We show that there are no spurious local minima in the non-convex factorized parametrization of low-rank matrix recovery from incoherent linear measurements. With noisy measurements we show all local minima are very close to a global optimum. Together with a curvature bound at saddle points, this yields a polynomial time global convergence guarantee for stochastic gradient descent {\em from random initialization}.) <|cite_end|> proved a similar result for symmetric matrix sensing. <|cite_start|> (Reference: Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach: We consider the non-square matrix sensing problem, under restricted isometry property (RIP) assumptions. We focus on the non-convex formulation, where any rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ is represented as $UV^\top$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. In this paper, we complement recent findings on the non-convex geometry of the analogous PSD setting [5], and show that matrix factorization does not introduce any spurious local minima, under RIP.) <|cite_end|> extended the matrix sensing result to the asymmetric case. All of these works guarantee global convergence to the correct solution. There has been a lot of work on the local convergence analysis for various algorithms and problems. For matrix sensing or matrix completion, the works <|cite_start|> (Reference: Matrix Completion from a Few Entries: Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5. Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.) <|cite_end|> <|cite_start|> (Reference: Matrix Completion from Noisy Entries: Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the `Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan et al.(2009), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.) <|cite_end|> <|cite_start|> (Reference: Fast matrix completion without the condition number: We give the first algorithm for Matrix Completion whose running time and sample complexity is polynomial in the rank of the unknown target matrix, linear in the dimension of the matrix, and logarithmic in the condition number of the matrix.
To the best of our knowledge, all previous algorithms either incurred a quadratic dependence on the condition number of the unknown matrix or a quadratic dependence on the dimension of the matrix in the running time. Our algorithm is based on a novel extension of Alternating Minimization which we show has theoretical guarantees under standard assumptions even in the presence of noise.) <|cite_end|> <|cite_start|> (Reference: Understanding Alternating Minimization for Matrix Completion: Alternating Minimization is a widely used and empirically successful heuristic for matrix completion and related low-rank optimization problems. Theoretical guarantees for Alternating Minimization have been hard to come by and are still poorly understood. This is in part because the heuristic is iterative and non-convex in nature. We give a new algorithm based on Alternating Minimization that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption. Our results reduce the sample size requirements of the Alternating Minimization approach by at least a quartic factor in the rank and the condition number of the unknown matrix. These improvements apply even if the matrix is only close to low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in the dimension of the matrix and, in a broad range of parameters, gives the strongest sample bounds among all subquadratic time algorithms that we are aware of. Underlying our work is a new robust convergence analysis of the well-known Power Method for computing the dominant singular vectors of a matrix. This viewpoint leads to a conceptually simple understanding of Alternating Minimization. In addition, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms based on a smoothed analysis of the QR factorization. These techniques may be of interest beyond their application here.) <|cite_end|> <|cite_start|> (Reference: Low-rank Matrix Completion using Alternating Minimization: Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.) 
<|cite_end|> <|cite_start|> (Reference: Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees: Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.) <|cite_end|> <|cite_start|> (Reference: Guaranteed matrix completion via nonconvex factorization: Matrix factorization is a popular approach for large-scale matrix completion. In this approach, the unknown low-rank matrix is expressed as the product of two much smaller matrices so that the low-rank property is automatically fulfilled. The resulting optimization problem, even with huge size, can be solved (to stationary points) very efficiently through standard optimization algorithms such as alternating minimization and stochastic gradient descent (SGD). However, due to the non-convexity caused by the factorization model, there is a limited theoretical understanding of whether these algorithms will generate a good solution. In this paper, we establish a theoretical guarantee for the factorization based formulation to correctly recover the underlying low-rank matrix. In particular, we show that under similar conditions to those in previous works, many standard optimization algorithms converge to the global optima of the factorization based formulation, and recover the true low-rank matrix. A major difference of our work from the existing results is that we do not need resampling (i.e., Using independent samples at each iteration) in either the algorithm or its analysis. To the best of our knowledge, our result is the first one that provides exact recovery guarantee for many standard algorithms such as gradient descent, SGD and block coordinate gradient descent.) <|cite_end|> <|cite_start|> (Reference: A nonconvex optimization framework for low rank matrix estimation: We study the estimation of low rank matrices via nonconvex optimization. Compared with convex relaxation, nonconvex optimization exhibits superior empirical performance for large scale instances of low rank matrix estimation. However, the understanding of its theoretical guarantees are limited. In this paper, we define the notion of projected oracle divergence based on which we establish sufficient conditions for the success of nonconvex optimization. We illustrate the consequences of this general framework for matrix sensing. 
In particular, we prove that a broad class of nonconvex optimization algorithms, including alternating minimization and gradient-type methods, geometrically converge to the global optimum and exactly recover the true low rank matrices under standard conditions.) <|cite_end|> <|cite_start|> (Reference: Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent: We address the rectangular matrix completion problem by lifting the unknown matrix to a positive semidefinite matrix in higher dimension, and optimizing a nonconvex objective over the semidefinite factor using a simple gradient descent scheme. With $O( \mu r^2 \kappa^2 n \max(\mu, \log n))$ random observations of a $n_1 \times n_2$ $\mu$-incoherent matrix of rank $r$ and condition number $\kappa$, where $n = \max(n_1, n_2)$, the algorithm linearly converges to the global optimum with high probability.) <|cite_end|> <|cite_start|> (Reference: Low-rank Solutions of Linear Matrix Equations via Procrustes Flow: In this paper we study the problem of recovering a low-rank matrix from linear measurements. Our algorithm, which we call Procrustes Flow, starts from an initial estimate obtained by a thresholding scheme followed by gradient descent on a non-convex objective. We show that as long as the measurements obey a standard restricted isometry property, our algorithm converges to the unknown matrix at a geometric rate. In the case of Gaussian measurements, such convergence occurs for a $n_1 \times n_2$ matrix of rank $r$ when the number of measurements exceeds a constant times $(n_1+n_2)r$.) <|cite_end|> showed that given a good enough initialization, many simple local search algorithms, including gradient descent and alternating least squares, succeed. Particularly, several works (e.g. <|cite_start|> (Reference: Guaranteed matrix completion via nonconvex factorization: Matrix factorization is a popular approach for large-scale matrix completion. In this approach, the unknown low-rank matrix is expressed as the product of two much smaller matrices so that the low-rank property is automatically fulfilled. The resulting optimization problem, even with huge size, can be solved (to stationary points) very efficiently through standard optimization algorithms such as alternating minimization and stochastic gradient descent (SGD). However, due to the non-convexity caused by the factorization model, there is a limited theoretical understanding of whether these algorithms will generate a good solution. In this paper, we establish a theoretical guarantee for the factorization based formulation to correctly recover the underlying low-rank matrix. In particular, we show that under similar conditions to those in previous works, many standard optimization algorithms converge to the global optima of the factorization based formulation, and recover the true low-rank matrix. A major difference of our work from the existing results is that we do not need resampling (i.e., Using independent samples at each iteration) in either the algorithm or its analysis. To the best of our knowledge, our result is the first one that provides exact recovery guarantee for many standard algorithms such as gradient descent, SGD and block coordinate gradient descent.) 
<|cite_end|> <|cite_start|> (Reference: Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent: We address the rectangular matrix completion problem by lifting the unknown matrix to a positive semidefinite matrix in higher dimension, and optimizing a nonconvex objective over the semidefinite factor using a simple gradient descent scheme. With $O( \mu r^2 \kappa^2 n \max(\mu, \log n))$ random observations of a $n_1 \times n_2$ $\mu$-incoherent matrix of rank $r$ and condition number $\kappa$, where $n = \max(n_1, n_2)$, the algorithm linearly converges to the global optimum with high probability.) <|cite_end|>) accomplished this by showing that a geometric property very similar to strong convexity holds in a neighborhood of the optimal solution. For robust PCA, there are also many local convergence analyses <|cite_start|> (Reference: The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices: This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, the Robust PCA problem can be exactly solved via convex optimization that minimizes a combination of the nuclear norm and the $\ell^1$-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such new objective functions and prove the optimality of the proposed algorithms and characterize their convergence rate. Empirically, the proposed new algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms achieve higher precision, yet being less storage/memory demanding. We also show that the ALM technique can be used to solve the (related but somewhat simpler) matrix completion problem and obtain rather promising results too. We further prove the necessary and sufficient condition for the inexact ALM to converge globally. Matlab code of all algorithms discussed are available at http://perception.csl.illinois.edu/matrix-rank/home.html) <|cite_end|> <|cite_start|> (Reference: Non-convex Robust PCA: We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n)$, our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations.
In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establishes the improved speed and accuracy of our method over existing convex implementations.) <|cite_end|> <|cite_start|> (Reference: Fast Algorithms for Robust PCA via Gradient Descent: We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has been recently well-studied, and conditions on when recovery is possible (how many observations do we need, how many corruptions can we tolerate) via polynomial-time algorithms is by now understood. This paper presents and analyzes a non-convex optimization approach that greatly reduces the computational complexity of the above problems, compared to the best available algorithms. In particular, in the fully observed case, with $r$ denoting rank and $d$ dimension, we reduce the complexity from $\mathcal{O}(r^2d^2\log(1/\varepsilon))$ to $\mathcal{O}(rd^2\log(1/\varepsilon))$ -- a big savings when the rank is big. For the partially observed case, we show the complexity of our algorithm is no more than $\mathcal{O}(r^4d \log d \log(1/\varepsilon))$. Not only is this the best-known run-time for a provable algorithm under partial observation, but in the setting where $r$ is small compared to $d$, it also allows for near-linear-in-$d$ run-time that can be exploited in the fully-observed case as well, by simply running our algorithm on a subset of the observations.) <|cite_end|> <|cite_start|> (Reference: A nonconvex free lunch for low-rank plus sparse matrix recovery: We study the problem of low-rank plus sparse matrix recovery. We propose a generic and efficient nonconvex optimization algorithm based on projected gradient descent and double thresholding operator, with much lower computational complexity. Compared with existing convex-relaxation based methods, the proposed algorithm recovers the low-rank plus sparse matrices for free, without incurring any additional statistical cost. It not only enables exact recovery of the unknown low-rank and sparse matrices in the noiseless setting, and achieves minimax optimal statistical error rate in the noisy case, but also matches the best-known robustness guarantee (i.e., tolerance for sparse corruption). At the core of our theory is a novel structural Lipschitz gradient condition for low-rank plus sparse matrices, which is essential for proving the linear convergence rate of our algorithm, and we believe is of independent interest to prove fast rates for general superposition-structured models. We demonstrate the superiority of our generic algorithm, both theoretically and experimentally, through three concrete applications: robust matrix sensing, robust PCA and one-bit matrix decomposition.) <|cite_end|>. Several works also try to unify the analysis for similar problems. <|cite_start|> (Reference: Dropping Convexity for Faster Semi-definite Optimization: We study the minimization of a convex function $f(X)$ over the set of $n\times n$ positive semi-definite matrices, but when the problem is recast as $\min_U g(U) := f(UU^\top)$, with $U \in \mathbb{R}^{n \times r}$ and $r \leq n$. 
We study the performance of gradient descent on $g$---which we refer to as Factored Gradient Descent (FGD)---under standard assumptions on the original function $f$. We provide a rule for selecting the step size and, with this choice, show that the local convergence rate of FGD mirrors that of standard gradient descent on the original $f$: i.e., after $k$ steps, the error is $O(1/k)$ for smooth $f$, and exponentially small in $k$ when $f$ is (restricted) strongly convex. In addition, we provide a procedure to initialize FGD for (restricted) strongly convex objectives and when one only has access to $f$ via a first-order oracle; for several problem instances, such proper initialization leads to global convergence guarantees. FGD and similar procedures are widely used in practice for problems that can be posed as matrix factorization. To the best of our knowledge, this is the first paper to provide precise convergence rate guarantees for general convex functions under standard convex assumptions.) <|cite_end|> gave a framework for the local analysis of these low-rank problems. <|cite_start|> (Reference: Basis learning as an algorithmic primitive: A number of important problems in theoretical computer science and machine learning can be interpreted as recovering a certain basis. These include symmetric matrix eigendecomposition, certain tensor decompositions, Independent Component Analysis (ICA), spectral clustering and Gaussian mixture learning. Each of these problems reduces to an instance of our general model, which we call a "Basis Encoding Function" (BEF). We show that learning a basis within this model can then be provably and efficiently achieved using a first order iteration algorithm (gradient iteration). Our algorithm goes beyond tensor methods while generalizing a number of existing algorithms---e.g., the power method for symmetric matrices, the tensor power iteration for orthogonal decomposable tensors, and cumulant-based FastICA---all within a broader function-based dynamical systems framework. Our framework also unifies the unusual phenomenon observed in these domains that they can be solved using efficient non-convex optimization. Specifically, we describe a class of BEFs such that their local maxima on the unit sphere are in one-to-one correspondence with the basis elements. This description relies on a certain "hidden convexity" property of these functions. We provide a complete theoretical analysis of the gradient iteration even when the BEF is perturbed. We show convergence and complexity bounds polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices. In addition we show that our algorithm exhibits fast (superlinear) convergence and relate the speed of convergence to the properties of the BEF. Moreover, the gradient iteration algorithm can be easily and efficiently implemented in practice.) <|cite_end|> presented a framework for learning basis functions, which generalizes tensor decompositions. Their techniques imply that the optimization landscapes of all such problems are very similar.
For problems looking for a symmetric PSD matrix, <|cite_start|> (Reference: The nonconvex geometry of low-rank matrix optimizations with general objective functions: This work considers the minimization of a general convex function f (X) over the cone of positive semi-definite matrices whose optimal solution X∗ is of low-rank. Standard first-order convex solvers require performing an eigenvalue decomposition in each iteration, severely limiting their scalability. A natural nonconvex reformulation of the problem factors the variable X into the product of a rectangular matrix with fewer columns and its transpose. For a special class of matrix sensing and completion problems with quadratic objective functions, local search algorithms applied to the factored problem have been shown to be much more efficient and, in spite of being nonconvex, to converge to the global optimum. The purpose of this work is to extend this line of study to general convex objective functions f (X) and investigate the geometry of the resulting factored formulations. Specifically, we prove that when f (X) satisfies the restricted well-conditioned assumption, each critical point of the factored problem either corresponds to the optimal solution X∗ or a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Such a geometric structure of the factored formulation ensures that many local search algorithms can converge to the global optimum with random initializations.) <|cite_end|> showed that for objectives similar to \eqref{eq:asymmetricobj} (but in the symmetric setting), restricted smoothness/strong convexity of the function $f$ suffices for local analysis. However, their framework does not address the interaction between the regularizer and the function $f$, and hence cannot be directly applied to problems such as matrix completion or robust PCA. \paragraph{Organization} We first introduce notations and basic optimality conditions in Section~\ref{sec:prelim}. Then Section~\ref{sec:problems} introduces the problems and our results. For simplicity, we present our framework for the symmetric case in Section~\ref{sec:symmetric}, and briefly discuss how to reduce asymmetric problems to symmetric problems in Section~\ref{sec:asymmetric}. We then show in Section~\ref{sec:runtime} how our geometric results imply efficient algorithms, in particular fast runtimes for popular local search algorithms. For a clean presentation, many proofs are deferred to the appendix. <|paper_end|>
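As a quick, self-contained illustration of Corollary~\ref{cor:runtime} above (simple local search recovering the low-rank matrix from an arbitrary starting point), the following Python sketch runs plain gradient descent on the factored objective for PSD matrix completion. The instance size, sampling rate, step size, and iteration count are illustrative assumptions rather than parameters from the paper, and the incoherence regularizer discussed above is omitted for brevity.
\begin{verbatim}
# Minimal sketch (not from the paper): gradient descent on the factored
# objective f(U) = 0.5 * ||P_Omega(U U^T - M*)||_F^2 for PSD matrix
# completion, from a random start. Sizes, step size, and iteration count
# are illustrative assumptions; the incoherence regularizer is omitted.
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 40, 2, 0.5                      # dimension, rank, sampling rate
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T                # ground-truth PSD rank-r matrix
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask).T    # symmetric observation pattern

U = 0.01 * rng.standard_normal((n, r))    # arbitrary (random) initialization
eta = 0.01                                # step size
for _ in range(5000):
    R = mask * (U @ U.T - M_star)         # residual on observed entries
    U -= eta * 2 * R @ U                  # gradient of f with respect to U

err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
print(f"relative recovery error: {err:.2e}")  # expect a small value
\end{verbatim}
On small random instances like this, the relative error typically drops far below $10^{-3}$; a single run is of course anecdotal, but it matches the benign-landscape picture described above.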
[ "<|reference_start|> Accelerated Methods for Non-Convex Optimization: We present an accelerated gradient method for non-convex optimization problems with Lipschitz continuous first and second derivatives. The method requires time $O(\\epsilon^{-7/4} \\log(1/ \\epsilon) )$ to find an $\\epsilon$-stationary point, meaning a point $x$ such that $\\|\\nabla f(x)\\| \\le \\epsilon$. The method improves upon the $O(\\epsilon^{-2} )$ complexity of gradient descent and provides the additional second-order guarantee that $\\nabla^2 f(x) \\succeq -O(\\epsilon^{1/2})I$ for the computed $x$. Furthermore, our method is Hessian free, i.e. it only requires gradient computations, and is therefore suitable for large scale applications. <|reference_end|>", "<|reference_start|> Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees: Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions. <|reference_end|>", "<|reference_start|> Dropping Convexity for Faster Semi-definite Optimization: We study the minimization of a convex function $f(X)$ over the set of $n\\times n$ positive semi-definite matrices, but when the problem is recast as $\\min_U g(U) := f(UU^\\top)$, with $U \\in \\mathbb{R}^{n \\times r}$ and $r \\leq n$. We study the performance of gradient descent on $g$---which we refer to as Factored Gradient Descent (FGD)---under standard assumptions on the original function $f$. We provide a rule for selecting the step size and, with this choice, show that the local convergence rate of FGD mirrors that of standard gradient descent on the original $f$: i.e., after $k$ steps, the error is $O(1/k)$ for smooth $f$, and exponentially small in $k$ when $f$ is (restricted) strongly convex. In addition, we provide a procedure to initialize FGD for (restricted) strongly convex objectives and when one only has access to $f$ via a first-order oracle; for several problem instances, such proper initialization leads to global convergence guarantees. FGD and similar procedures are widely used in practice for problems that can be posed as matrix factorization. To the best of our knowledge, this is the first paper to provide precise convergence rate guarantees for general convex functions under standard convex assumptions. 
<|reference_end|>", "<|reference_start|> Basis learning as an algorithmic primitive: A number of important problems in theoretical computer science and machine learning can be interpreted as recovering a certain basis. These include symmetric matrix eigendecomposition, certain tensor decompositions, Independent Component Analysis (ICA), spectral clustering and Gaussian mixture learning. Each of these problems reduces to an instance of our general model, which we call a \"Basis Encoding Function\" (BEF). We show that learning a basis within this model can then be provably and efficiently achieved using a first order iteration algorithm (gradient iteration). Our algorithm goes beyond tensor methods while generalizing a number of existing algorithms---e.g., the power method for symmetric matrices, the tensor power iteration for orthogonal decomposable tensors, and cumulant-based FastICA---all within a broader function-based dynamical systems framework. Our framework also unifies the unusual phenomenon observed in these domains that they can be solved using efficient non-convex optimization. Specifically, we describe a class of BEFs such that their local maxima on the unit sphere are in one-to-one correspondence with the basis elements. This description relies on a certain \"hidden convexity\" property of these functions. \nWe provide a complete theoretical analysis of the gradient iteration even when the BEF is perturbed. We show convergence and complexity bounds polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices. In addition we show that our algorithm exhibits fast (superlinear) convergence and relate the speed of convergence to the properties of the BEF. Moreover, the gradient iteration algorithm can be easily and efficiently implemented in practice. <|reference_end|>" ]
[ 8, 32, 43, 44 ]
{"<|cite_1|>": "ss-835638", "<|cite_2|>": "ss-1269519", "<|cite_3|>": "ss-697593", "<|cite_4|>": "arxiv-86986", "<|multi_cite_5_1|>": "arxiv-98519", "<|multi_cite_5_2|>": "arxiv-105578", "<|cite_6|>": "arxiv-98531", "<|multi_cite_17_1|>": "ss-697593", "<|multi_cite_17_2|>": "arxiv-109226", "<|multi_cite_17_3|>": "ss-1023217", "<|multi_cite_17_4|>": "arxiv-118050", "<|multi_cite_7_1|>": "ss-1382744", "<|multi_cite_7_2|>": "ss-889410", "<|multi_cite_7_3|>": "arxiv-3881", "<|cite_8|>": "ss-1295171", "<|cite_18|>": "arxiv-90925", "<|cite_9|>": "ss-1071734", "<|cite_10|>": "arxiv-3881", "<|cite_11|>": "arxiv-10591", "<|cite_19|>": "ss-751859", "<|multi_cite_12_1|>": "arxiv-98519", "<|multi_cite_12_2|>": "arxiv-105578", "<|cite_13|>": "arxiv-98531", "<|cite_14|>": "ss-1071734", "<|cite_20|>": "arxiv-98531", "<|cite_21|>": "arxiv-98519", "<|cite_22|>": "arxiv-105578", "<|multi_cite_15_1|>": "arxiv-6122", "<|multi_cite_15_2|>": "arxiv-7788", "<|multi_cite_15_3|>": "arxiv-63591", "<|multi_cite_15_4|>": "arxiv-53489", "<|multi_cite_15_5|>": "arxiv-38804", "<|multi_cite_15_6|>": "arxiv-83757", "<|multi_cite_15_7|>": "ss-2554302", "<|multi_cite_15_8|>": "ss-2142371", "<|multi_cite_15_9|>": "arxiv-98478", "<|multi_cite_15_10|>": "ss-1579868", "<|multi_cite_23_1|>": "ss-2554302", "<|multi_cite_23_2|>": "arxiv-98478", "<|multi_cite_16_1|>": "arxiv-16283", "<|multi_cite_16_2|>": "arxiv-67959", "<|multi_cite_16_3|>": "arxiv-98638", "<|multi_cite_16_4|>": "ss-2238255", "<|cite_24|>": "arxiv-83916", "<|cite_25|>": "ss-2238256", "<|cite_26|>": "ss-998671"}
2309.03728-1
] Let $\varepsilon \in (0,1)$. Any adjacency sketch for trees that only errs on edges with probability at most $\varepsilon$ must use labels of size $\log n+\log (1-\varepsilon)-O(1)$ for $n$-vertex trees. \end{theorem} A possible relaxation of Definition~\ref{resilience_definition} is to allow two-sided errors, i.e.\ to err both on edges and non-edges (with an error probability of $1/3$ or some other fixed value bounded away from $1/2$). All the constructions (Theorems~\ref{thm: projective}, \ref{thm: coloring}, and \ref{thm: general}) of course still hold, and the lower bound (Theorem~\ref{thm: lower}) can be modified for the two-sided variant as discussed in~\cref{lower bound for two-sided error}. We choose to focus on the one-sided variant due to the natural one-sided properties of our constructions and for the simplicity of some of the proofs. \paragraph{Amplification with Two-Sided Error:} The natural error-amplification technique is to employ several schemes independently in parallel and take a majority vote. We can get a probability of forgery $\varepsilon$ by using $k=O(\log (1/\varepsilon ))$ schemes with some fixed probability of forgery bounded away from $1/2$, but, unlike \cref{amplification_method}, accepting a pair of labels when at least $k/2$ of the schemes accept. This technique is described in more detail in Proposition 2.2 of <|cite_start|> (Reference: Randomized Communication and Implicit Graph Representations: We study constant-cost randomized communication problems and relate them to implicit graph representations in structural graph theory. Specifically, constant-cost communication problems correspond to hereditary graph families that admit constant-size adjacency sketches, or equivalently constant-size probabilistic universal graphs (PUGs), and these graph families are a subset of families that admit adjacency labeling schemes of size O(log n), which are the subject of the well-studied implicit graph question (IGQ). We initiate the study of the hereditary graph families that admit constant-size PUGs, with the two (equivalent) goals of (1) understanding randomized constant-cost communication problems, and (2) understanding a probabilistic version of the IGQ. For each family $\mathcal F$ studied in this paper (including the monogenic bipartite families, product graphs, interval and permutation graphs, families of bounded twin-width, and others), it holds that the subfamilies $\mathcal H \subseteq \mathcal F$ admit constant-size PUGs (i.e. adjacency sketches) if and only if they are stable (i.e. they forbid a half-graph as a semi-induced subgraph). The correspondence between communication problems and hereditary graph families allows for a new method of constructing adjacency labeling schemes. By this method, we show that the induced subgraphs of any Cartesian products are positive examples to the IGQ. We prove that this probabilistic construction cannot be derandomized by using an Equality oracle, i.e. the Equality oracle cannot simulate the k-Hamming Distance communication protocol. We also obtain constant-size sketches for deciding $\mathsf{dist}(x, y) \le k$ for vertices $x$, $y$ in any stable graph family with bounded twin-width. This generalizes to constant-size sketches for deciding first-order formulas over the same graphs.) <|cite_end|> for sketches in the non-adversarial case. In our case, we need to refer to a general argument for games against an adversary, such as the parallel repetition of single-prover interactive proofs, as in~\cref{amplification_method}.
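For intuition, the following sketch (ours, not from the paper) computes the forgery probability of the majority scheme under the simplifying assumption that the $k$ repetitions behave fully independently; handling an adaptive adversary requires the parallel-repetition argument and the subset-product property discussed next.
\begin{verbatim}
# Illustrative sketch under an independence assumption (not from the paper):
# if each of k independent schemes is forged with probability delta < 1/2,
# fooling the majority requires fooling at least ceil(k/2) schemes, so the
# forgery probability is the binomial tail Pr[Bin(k, delta) >= ceil(k/2)],
# which decays exponentially in k; hence k = O(log(1/eps)) suffices.
from math import comb

def majority_forgery_prob(k: int, delta: float) -> float:
    t = (k + 1) // 2                      # majority threshold
    return sum(comb(k, i) * delta**i * (1 - delta)**(k - i)
               for i in range(t, k + 1))

for k in (5, 15, 25, 45):
    print(k, majority_forgery_prob(k, delta=1/3))
# the printed values shrink roughly geometrically as k grows
\end{verbatim}
For example, with $\delta = 1/3$ the tail probability falls to around $10^{-2}$ by $k \approx 45$, illustrating the $k=O(\log (1/\varepsilon))$ scaling.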
The proof we mentioned in~\cref{sec: adaptive adversaries} (due to Goldreich <|cite_start|> (Reference: Modern Cryptography, Probabilistic Proofs and Pseudorandomness: ) <|cite_end|>) bounds the winning probability under parallel repetition when the adversary wins only if it wins all of the games. This is also true for any subset $S\subseteq \left[k\right]$ of the $k$ repetitions: the probability that the adversary wins all games in $S$ is the product of the probabilities of winning each game (again from <|cite_start|> (Reference: Modern Cryptography, Probabilistic Proofs and Pseudorandomness: ) <|cite_end|>). To apply this result to the case where the final winner is determined by the player who won a majority of the games, we can apply the generalized concentration bound (an extension of the Chernoff-Hoeffding bounds) by Panconesi and Srinivasan <|cite_start|> (Reference: Randomized distributed edge coloring via an extension of the Chernoff-Hoeffding bounds: Certain types of routing, scheduling, and resource-allocation problems in a distributed setting can be modeled as edge-coloring problems. We present fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation. Our algorithms compute an edge coloring of a graph G with n nodes and maximum degree ∆ with at most 1.6∆ +O(log n) colors with high probability (arbitrarily close to 1) for any fixed δ > 0; they run in polylogarithmic time. The upper bound on the number of colors improves upon the (2∆ − 1)-coloring achievable by a simple reduction to vertex coloring. To analyze the performance of our algorithms, we introduce new techniques for proving upper bounds on the tail probabilities of certain random variables. The Chernoff–Hoeffding bounds are fundamental tools that are used very frequently in estimating tail probabilities. However, they assume stochastic independence among certain random variables, which may not always hold. Our results extend the Chernoff–Hoeffding bounds to certain types of random variables which are not stochastically independent. We believe that these results are of independent interest and merit further study.) <|cite_end|> (see also Impagliazzo and Kabanets <|cite_start|> (Reference: Constructive Proofs of Concentration Bounds: ) <|cite_end|>). These bounds do not assume independence of the events, only that, for every subset, the probability that all of its variables equal '1' is bounded by the product of the individual probabilities.\footnote{For instance, Theorem 3.1 in <|cite_start|> (Reference: Constructive Proofs of Concentration Bounds: ) <|cite_end|> says that if $X_1, X_2, \ldots, X_k$ are Boolean random variables and there is a $\delta$ such that for every set $S \subseteq [k]$, $\mathds{P}[\wedge_{i \in S} X_i = 1] \leq \delta^{|S|}$, then for any $\gamma$ such that $\delta \leq \gamma \leq 1$, $$\mathds{P}\left[\sum_{i=1}^k X_i > \gamma k\right] \leq e^{-k \cdot D(\gamma\parallel \delta)} $$ where $D(\cdot\parallel \cdot)$ is the relative entropy function.} <|paper_end|>
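To make the footnoted bound concrete, here is a small worked evaluation (our own illustration, not taken from the cited works) of $e^{-k \cdot D(\gamma\parallel\delta)}$, using the binary relative entropy $D(\gamma\parallel\delta) = \gamma\ln(\gamma/\delta) + (1-\gamma)\ln((1-\gamma)/(1-\delta))$:
\begin{verbatim}
# Worked numeric check of the footnoted bound (our illustration):
# Pr[sum_i X_i > gamma * k] <= exp(-k * D(gamma || delta)),
# where D is the binary relative entropy in nats.
from math import exp, log

def rel_entropy(gamma: float, delta: float) -> float:
    return (gamma * log(gamma / delta)
            + (1 - gamma) * log((1 - gamma) / (1 - delta)))

delta, gamma = 1/3, 1/2   # e.g. per-event bound 1/3, majority threshold 1/2
for k in (10, 50, 100):
    print(k, exp(-k * rel_entropy(gamma, delta)))
# D(1/2 || 1/3) is about 0.059 nats, so the bound decays like exp(-0.059*k)
\end{verbatim}
With $\delta = 1/3$ and $\gamma = 1/2$, $D(\gamma\parallel\delta) \approx 0.059$ nats, so the bound decays like $e^{-0.059k}$, consistent with a majority over $k = O(\log(1/\varepsilon))$ repetitions driving the error down to $\varepsilon$.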
[ "<|reference_start|> Modern Cryptography, Probabilistic Proofs and Pseudorandomness: <|reference_end|>", "<|reference_start|> Modern Cryptography, Probabilistic Proofs and Pseudorandomness: <|reference_end|>", "<|reference_start|> Randomized distributed edge coloring via an extension of the Chernoff-Hoeffding bounds: Certain types of routing, scheduling, and resource-allocation problems in a distributed setting can be modeled as edge-coloring problems. We present fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation. Our algorithms compute an edge coloring of a graph G with n nodes and maximum degree ∆ with at most 1.6∆ +O(log n) colors with high probability (arbitrarily close to 1) for any fixed δ > 0; they run in polylogarithmic time. The upper bound on the number of colors improves upon the (2∆ − 1)-coloring achievable by a simple reduction to vertex coloring. To analyze the performance of our algorithms, we introduce new techniques for proving upper bounds on the tail probabilities of certain random variables. The Chernoff–Hoeffding bounds are fundamental tools that are used very frequently in estimating tail probabilities. However, they assume stochastic independence among certain random variables, which may not always hold. Our results extend the Chernoff–Hoeffding bounds to certain types of random variables which are not stochastically independent. We believe that these results are of independent interest and merit further study. <|reference_end|>", "<|reference_start|> Constructive Proofs of Concentration Bounds: <|reference_end|>" ]
[ 1, 2, 3, 4 ]
{"<|cite_1|>": "ss-1838888", "<|cite_2|>": "arxiv-59399", "<|cite_3|>": "arxiv-75827", "<|cite_4|>": "ss-1251598", "<|cite_5|>": "ss-1251599", "<|cite_6|>": "arxiv-379413", "<|multi_cite_7_1|>": "arxiv-383322", "<|multi_cite_7_2|>": "arxiv-382169", "<|cite_8|>": "arxiv-70921", "<|cite_9|>": "ss-1543105", "<|cite_10|>": "arxiv-37885", "<|cite_11|>": "ss-1838888", "<|cite_12|>": "ss-1251599", "<|cite_13|>": "arxiv-233279", "<|cite_14|>": "arxiv-379413", "<|multi_cite_15_1|>": "arxiv-233279", "<|multi_cite_15_2|>": "arxiv-379413", "<|cite_16|>": "ss-1251600", "<|cite_17|>": "arxiv-3153", "<|cite_18|>": "ss-1251601", "<|cite_19|>": "ss-1251601", "<|cite_21|>": "ss-1855695", "<|cite_22|>": "ss-1855695", "<|cite_23|>": "ss-1251602", "<|cite_24|>": "ss-1251603", "<|cite_25|>": "ss-1251599", "<|cite_26|>": "ss-1251599", "<|cite_27|>": "arxiv-379413", "<|cite_28|>": "ss-1251600", "<|cite_29|>": "ss-1251600", "<|cite_30|>": "ss-794468", "<|cite_31|>": "ss-1251604", "<|cite_32|>": "ss-1251604"}
1810.09992-0
<|paper_start|> Title: Computation Scheduling for Distributed Machine Learning with Straggling Workers Abstract: Computation Scheduling for Distributed Machine Learning with Straggling Workers: We study scheduling of computation tasks across n workers in a large scale distributed learning problem with the help of a master. Computation and communication delays are assumed to be random, and redundant computations are assigned to workers in order to tolerate stragglers. We consider sequential computation of tasks assigned to a worker, while the result of each computation is sent to the master right after its completion. Each computation round, which can model an iteration of the stochastic gradient descent (SGD) algorithm, is completed once the master receives k distinct computations, referred to as the computation target. Our goal is to characterize the average completion time as a function of the computation load, which denotes the portion of the dataset available at each worker, and the computation target. We propose two computation scheduling schemes that specify the tasks assigned to each worker, as well as their computation schedule, i.e., the order of execution. Assuming a general statistical model for computation and communication delays, we derive the average completion time of the proposed schemes. We also establish a lower bound on the minimum average completion time by assuming prior knowledge of the random delays. Experimental results carried out on Amazon EC2 cluster show a significant reduction in the average completion time over existing coded and uncoded computing schemes. It is also shown numerically that the gap between the proposed scheme and the lower bound is relatively small, confirming the efficiency of the proposed scheduling design. Introduction \label{SecIntro} The growing computational complexity and memory requirements of emerging machine learning applications involving massive datasets cannot be satisfied on a single machine. Thus, distributed computation across tens or even hundreds of computation servers, called \textit{workers}, has been a topic of great recent interest <|cite_start|> (Reference: Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. 
After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.) <|cite_end|> <|cite_start|> (Reference: Pumma: Parallel universal matrix multiplication algorithms on distributed memory concurrent computers: This paper describes the Parallel Universal Matrix Multiplication Algorithms (PUMMA) on distributed memory concurrent computers. The PUMMA package includes not only the non-transposed matrix multiplication routine C = A·B, but also transposed multiplication routines C = A^T·B, C = A·B^T, and C = A^T·B^T, for a block scattered data distribution. The routines perform efficiently for a wide range of processor configurations and block sizes. The PUMMA together provide the same functionality as the Level 3 BLAS routine xGEMM. Details of the parallel implementation of the routines are given, and results are presented for runs on the Intel Touchstone Delta computer.) <|cite_end|>. A major bottleneck in distributed computation is that the overall performance can significantly deteriorate due to slow servers, referred to as \textit{stragglers}. To mitigate the impact of stragglers, coded computation techniques, inspired by erasure codes against packet losses, have been proposed recently <|cite_start|> (Reference: Speeding Up Distributed Machine Learning Using Codes: Codes are widely used in many engineering applications to offer robustness against noise. In large-scale systems there are several types of noise that can affect the performance of distributed machine learning algorithms -- straggler nodes, system failures, or communication bottlenecks -- but there has been little interaction cutting across codes, machine learning, and distributed systems. In this work, we provide theoretical insights on how coded solutions can achieve significant gains compared to uncoded ones. We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling. For matrix multiplication, we use codes to alleviate the effect of stragglers, and show that if the number of homogeneous workers is $n$, and the runtime of each subtask has an exponential tail, coded computation can speed up distributed matrix multiplication by a factor of $\log n$. For data shuffling, we use codes to reduce communication bottlenecks, exploiting the excess in storage. We show that when a constant fraction $\alpha$ of the data matrix can be cached at each worker, and $n$ is the number of workers, \emph{coded shuffling} reduces the communication cost by a factor of $(\alpha + \frac{1}{n})\gamma(n)$ compared to uncoded shuffling, where $\gamma(n)$ is the ratio of the cost of unicasting $n$ messages to $n$ users to multicasting a common message (of the same size) to $n$ users. For instance, $\gamma(n) \simeq n$ if multicasting a message to $n$ users is as cheap as unicasting a message to one user. We also provide experiment results, corroborating our theoretical gains of the coded algorithms.)
<|cite_end|> <|cite_start|> (Reference: Gradient coding: Avoiding stragglers in distributed learning: We propose a novel coding theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in python (using MPI) to run on Amazon EC2, and show how we compare against baseline approaches in running time and generalization error.) <|cite_end|> <|cite_start|> (Reference: Improving Distributed Gradient Descent Using Reed-Solomon Codes: Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as \emph{stragglers}. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^2)$ decoding algorithm. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.) <|cite_end|> <|cite_start|> (Reference: Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD: Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but cause gradient staleness that can adversely affect convergence. In this work we present a novel theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time). The novelty in our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness.) <|cite_end|> <|cite_start|> (Reference: Robust Gradient Descent via Moment Encoding with LDPC Codes: This paper considers the problem of implementing large-scale gradient descent algorithms in a distributed computing setting in the presence of {\em straggling} processors. To mitigate the effect of the stragglers, it has been previously proposed to encode the data with an erasure-correcting code and decode at the master server at the end of the computation. We, instead, propose to encode the second-moment of the data with a low density parity-check (LDPC) code. 
The iterative decoding algorithms for LDPC codes have very low computational overhead and the number of decoding iterations can be made to automatically adjust with the number of stragglers in the system. We show that for a random model for stragglers, the proposed moment encoding based gradient descent method can be viewed as the stochastic gradient descent method. This allows us to obtain convergence guarantees for the proposed solution. Furthermore, the proposed moment encoding based method is shown to outperform the existing schemes in a real distributed computing setup.) <|cite_end|> <|cite_start|> (Reference: Communication-Computation Efficient Gradient Coding: This paper develops coding techniques to reduce the running time of distributed learning tasks. It characterizes the fundamental tradeoff to compute gradients (and more generally vector summations) in terms of three parameters: computation load, straggler tolerance and communication cost. It further gives an explicit coding scheme that achieves the optimal tradeoff based on recursive polynomial constructions, coding both across data subsets and vector components. As a result, the proposed scheme allows to minimize the running time for gradient computations. Implementations are made on Amazon EC2 clusters using Python with mpi4py package. Results show that the proposed scheme maintains the same generalization error while reducing the running time by $32\%$ compared to uncoded schemes and $23\%$ compared to prior coded schemes focusing only on stragglers (Tandon et al., ICML 2017).) <|cite_end|>. With coded computation, computations from only a subset of non-straggling workers are sufficient to complete the computation task, thanks to redundant computations performed by the faster workers. In <|cite_start|> (Reference: Speeding Up Distributed Machine Learning Using Codes: Codes are widely used in many engineering applications to offer robustness against noise. In large-scale systems there are several types of noise that can affect the performance of distributed machine learning algorithms -- straggler nodes, system failures, or communication bottlenecks -- but there has been little interaction cutting across codes, machine learning, and distributed systems. In this work, we provide theoretical insights on how coded solutions can achieve significant gains compared to uncoded ones. We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling. For matrix multiplication, we use codes to alleviate the effect of stragglers, and show that if the number of homogeneous workers is $n$, and the runtime of each subtask has an exponential tail, coded computation can speed up distributed matrix multiplication by a factor of $\log n$. For data shuffling, we use codes to reduce communication bottlenecks, exploiting the excess in storage. We show that when a constant fraction $\alpha$ of the data matrix can be cached at each worker, and $n$ is the number of workers, \emph{coded shuffling} reduces the communication cost by a factor of $(\alpha + \frac{1}{n})\gamma(n)$ compared to uncoded shuffling, where $\gamma(n)$ is the ratio of the cost of unicasting $n$ messages to $n$ users to multicasting a common message (of the same size) to $n$ users. For instance, $\gamma(n) \simeq n$ if multicasting a message to $n$ users is as cheap as unicasting a message to one user. We also provide experiment results, corroborating our theoretical gains of the coded algorithms.) 
<|cite_end|> the authors employ a maximum-distance separable (MDS) code-inspired distributed computation scheme in a distributed matrix-vector multiplication problem. A more general distributed gradient descent (DGD) problem is considered in <|cite_start|> (Reference: Gradient coding: Avoiding stragglers in distributed learning: We propose a novel coding theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in python (using MPI) to run on Amazon EC2, and show how we compare against baseline approaches in running time and generalization error.) <|cite_end|>, where the labeled dataset is distributed across workers, each evaluating the gradient on its own partition. Various coding schemes have been introduced in <|cite_start|> (Reference: Gradient coding: Avoiding stragglers in distributed learning: We propose a novel coding theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in python (using MPI) to run on Amazon EC2, and show how we compare against baseline approaches in running time and generalization error.) <|cite_end|> <|cite_start|> (Reference: Improving Distributed Gradient Descent Using Reed-Solomon Codes: Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as \emph{stragglers}. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^2)$ decoding algorithm. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.) <|cite_end|> <|cite_start|> (Reference: Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD: Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but cause gradient staleness that can adversely affect convergence. In this work we present a novel theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time).
The novelty in our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness.) <|cite_end|> <|cite_start|> (Reference: Robust Gradient Descent via Moment Encoding with LDPC Codes: This paper considers the problem of implementing large-scale gradient descent algorithms in a distributed computing setting in the presence of {\em straggling} processors. To mitigate the effect of the stragglers, it has been previously proposed to encode the data with an erasure-correcting code and decode at the master server at the end of the computation. We, instead, propose to encode the second-moment of the data with a low density parity-check (LDPC) code. The iterative decoding algorithms for LDPC codes have very low computational overhead and the number of decoding iterations can be made to automatically adjust with the number of stragglers in the system. We show that for a random model for stragglers, the proposed moment encoding based gradient descent method can be viewed as the stochastic gradient descent method. This allows us to obtain convergence guarantees for the proposed solution. Furthermore, the proposed moment encoding based method is shown to outperform the existing schemes in a real distributed computing setup.) <|cite_end|> <|cite_start|> (Reference: Communication-Computation Efficient Gradient Coding: This paper develops coding techniques to reduce the running time of distributed learning tasks. It characterizes the fundamental tradeoff to compute gradients (and more generally vector summations) in terms of three parameters: computation load, straggler tolerance and communication cost. It further gives an explicit coding scheme that achieves the optimal tradeoff based on recursive polynomial constructions, coding both across data subsets and vector components. As a result, the proposed scheme allows to minimize the running time for gradient computations. Implementations are made on Amazon EC2 clusters using Python with mpi4py package. Results show that the proposed scheme maintains the same generalization error while reducing the running time by $32\%$ compared to uncoded schemes and $23\%$ compared to prior coded schemes focusing only on stragglers (Tandon et al., ICML 2017).) <|cite_end|>, which assign redundant computations to workers to attain tolerance against stragglers. Coded distributed computation has also been studied for matrix-matrix multiplication, where the labeled data is coded before being delivered to workers <|cite_start|> (Reference: Straggler Mitigation in Distributed Matrix Multiplication: Fundamental Limits and Optimal Coding: We consider the problem of massive matrix multiplication, which underlies many data analytic applications, in a large-scale distributed system comprising a group of worker nodes. We target the stragglers' delay performance bottleneck, which is due to the unpredictable latency in waiting for slowest nodes (or stragglers) to finish their tasks. We propose a novel coding strategy, named \emph{entangled polynomial code}, for designing the intermediate computations at the worker nodes in order to minimize the recovery threshold (i.e., the number of workers that we need to wait for in order to compute the final output).
We demonstrate the optimality of entangled polynomial code in several cases, and show that it provides orderwise improvement over the conventional schemes for straggler mitigation. Furthermore, we characterize the optimal recovery threshold among all linear coding strategies within a factor of $2$ using \emph{bilinear complexity}, by developing an improved version of the entangled polynomial code. In particular, while evaluating bilinear complexity is a well-known challenging problem, we show that optimal recovery threshold for linear coding strategies can be approximated within a factor of $2$ of this fundamental quantity. On the other hand, the improved version of the entangled polynomial code enables further and orderwise reduction in the recovery threshold, compared to its basic version. Finally, we show that the techniques developed in this paper can also be extended to several other problems such as coded convolution and fault-tolerant computing, leading to tight characterizations.) <|cite_end|> <|cite_start|> (Reference: On the Optimal Recovery Threshold of Coded Matrix Multiplication: We provide novel coded computation strategies for distributed matrix-matrix products that outperform the recent "Polynomial code" constructions in recovery threshold, i.e., the required number of successful workers. When $m$-th fraction of each matrix can be stored in each worker node, Polynomial codes require $m^2$ successful workers, while our MatDot codes only require $2m-1$ successful workers, albeit at a higher communication cost from each worker to the fusion node. We also provide a systematic construction of MatDot codes. Further, we propose "PolyDot" coding that interpolates between Polynomial codes and MatDot codes to trade off communication cost and recovery threshold. Finally, we demonstrate a coding technique for multiplying $n$ matrices ($n \geq 3$) by applying MatDot and PolyDot coding ideas.) <|cite_end|> <|cite_start|> (Reference: Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication: We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to efficiently deal with straggling workers. The proposed strategy, named as \emph{polynomial codes}, achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics. Furthermore, we extend this code to distributed convolution and show its order-wise optimality.) <|cite_end|>, and for distributed computing of a polynomial function <|cite_start|> (Reference: Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy: We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. 
We propose Lagrange Coded Computing (LCC), a new framework to simultaneously provide (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state-of-the-art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., in terms of tolerating the maximum number of stragglers and adversaries, and providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$, and also achieves a $2.36\times$-$12.65\times$ speedup over the state-of-the-art straggler mitigation strategies.) <|cite_end|>. Also, for a linear regression problem, a polynomially coded approach is proposed in <|cite_start|> (Reference: Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding: We consider the problem of training a least-squares regression model on a large dataset using gradient descent. The computation is carried out on a distributed system consisting of a master node and multiple worker nodes. Such distributed systems are significantly slowed down due to the presence of slow-running machines (stragglers) as well as various communication bottlenecks. We propose "polynomially coded regression" (PCR) that substantially reduces the effect of stragglers and lessens the communication burden in such systems. The key idea of PCR is to encode the partial data stored at each worker, such that the computations at the workers can be viewed as evaluating a polynomial at distinct points. This allows the master to compute the final gradient by interpolating this polynomial. PCR significantly reduces the recovery threshold, defined as the number of workers the master has to wait for prior to computing the gradient. In particular, PCR requires a recovery threshold that scales inversely proportionally with the amount of computation/storage available at each worker. In comparison, state-of-the-art straggler-mitigation schemes require a much higher recovery threshold that only decreases linearly in the per worker computation/storage load. We prove that PCR's recovery threshold is near minimal and within a factor two of the best possible scheme. Our experiments over Amazon EC2 demonstrate that compared with state-of-the-art schemes, PCR improves the run-time by 1.50x ~ 2.36x with naturally occurring stragglers, and by as much as 2.58x ~ 4.29x with artificial stragglers.) <|cite_end|>, where the data is encoded and distributed across the workers to compute the gradient of the loss function. 
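To make the coded computation idea surveyed above concrete, the following is a minimal sketch of MDS-style coded matrix-vector multiplication, in which the results from any $k$ out of $n$ workers suffice to recover the full product. The $(n, k)$ values, the real-valued Vandermonde encoder, and the numpy-based decoder are illustrative assumptions, not the construction of any specific scheme cited above.

```python
# MDS-coded distributed matrix-vector multiplication (illustrative sketch):
# A is split into k row blocks, encoded into n coded blocks, and A @ x is
# recovered from the results of ANY k of the n workers.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3                       # n workers; any k results recover A @ x
rows, cols = 6, 4                 # rows must be divisible by k in this sketch
A = rng.standard_normal((rows, cols))
x = rng.standard_normal(cols)

# Encode: n coded blocks, each a linear combination of the k row blocks.
blocks = np.split(A, k)           # k blocks of shape (rows // k, cols)
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker i computes coded[i] @ x; suppose only workers {0, 2, 4} finish.
survivors = [0, 2, 4]
results = np.stack([coded[i] @ x for i in survivors])

# Decode: any k rows of a Vandermonde matrix are invertible, so solve for
# the k uncoded block products and stitch them back together.
decoded = np.linalg.solve(G[survivors, :], results)
assert np.allclose(np.concatenate(decoded), A @ x)
```

Note that the straggling workers' results are simply never used here, which is exactly the behavior the next paragraph contrasts with schemes that exploit partial work.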
Most existing coded computation techniques are designed to tolerate persistent stragglers, and discard computations performed by stragglers. However, in practice we often encounter \textit{non-persistent stragglers}, which, despite being slower, complete a significant portion of the assigned tasks by the time faster workers complete all their tasks <|cite_start|> (Reference: Hierarchical Coded Computation: Coded computation is a method to mitigate "stragglers" in distributed computing systems through the use of error correction coding that has lately received significant attention. First used in vector-matrix multiplication, the range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback to previous results is they completely ignore work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work completed by stragglers can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent. We propose design guidelines to optimize parameters of such codes. Numerical results show the proposed scheme has an improvement of a factor of 1.5 in the expected finishing time compared to previous work.) <|cite_end|>. Recently, there have been efforts to exploit the computations carried out by non-persistent stragglers at the expense of increasing the communication load from the workers to the master <|cite_start|> (Reference: Hierarchical Coded Computation: Coded computation is a method to mitigate "stragglers" in distributed computing systems through the use of error correction coding that has lately received significant attention. First used in vector-matrix multiplication, the range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback to previous results is they completely ignore work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work completed by stragglers can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent. We propose design guidelines to optimize parameters of such codes. Numerical results show the proposed scheme has an improvement of a factor of 1.5 in the expected finishing time compared to previous work.) <|cite_end|> <|cite_start|> (Reference: Exploitation of Stragglers in Coded Computation: In cloud computing systems slow processing nodes, often referred to as "stragglers", can significantly extend the computation time. Recent results have shown that error correction coding can be used to reduce the effect of stragglers. In this work we introduce a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by all nodes, including stragglers. 
We first consider vector-matrix multiplication and apply maximum distance separable (MDS) codes to small blocks of sub-matrices. The worker nodes process blocks sequentially, working block-by-block, transmitting partial per-block results to the master as they are completed. Sub-blocking allows a more continuous completion process, which thereby allows us to exploit the work of a much broader spectrum of processors and reduces computation time. We then apply this technique to matrix-matrix multiplication using product code. In this case, we show that the order of computing sub-tasks is a new degree of design freedom that can be exploited to reduce computation time further. We propose a novel approach to analyze the finishing time, which is different from typical order statistics. Simulation results show that the expected computation time decreases by a factor of at least two in compared to previous methods.) <|cite_end|> <|cite_start|> (Reference: Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication: Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of straggling nodes -- computing nodes that unpredictably slowdown or fail -- is a major bottleneck in such distributed computations. Ideal load balancing strategies that dynamically allocate more tasks to faster nodes require knowledge or monitoring of node speeds as well as the ability to quickly move data. Recently proposed fixed-rate erasure coding strategies can handle unpredictable node slowdown, but they ignore partial work done by straggling nodes thus resulting in a lot of redundant computation. We propose a \emph{rateless fountain coding} strategy that achieves the best of both worlds -- we prove that its latency is asymptotically equal to ideal load balancing, and it performs asymptotically zero redundant computations. Our idea is to create linear combinations of the $m$ rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than $m$ row-vector products are collectively finished by the nodes. We conduct experiments in three computing environments: local parallel computing, Amazon EC2, and Amazon Lambda, which show that rateless coding gives as much as $3\times$ speed-up over uncoded schemes.) <|cite_end|> <|cite_start|> (Reference: Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers: Distributed gradient descent (DGD) is an efficient way of implementing gradient descent (GD), especially for large data sets, by dividing the computation tasks into smaller subtasks and assigning to different computing servers (CSs) to be executed in parallel. In standard parallel execution, per-iteration waiting time is limited by the execution time of the straggling servers. Coded DGD techniques have been introduced recently, which can tolerate straggling servers via assigning redundant computation tasks to the CSs. In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks. However, although the straggling servers cannot complete all their assigned tasks, they are often able to complete a certain portion of them. 
In this paper, we allow multiple transmissions from each CS at each iteration in order to make sure a maximum number of completed computations can be reported to the aggregating server (AS), including the straggling servers. We numerically show that the average completion time per iteration can be reduced significantly by slightly increasing the communication load per server.) <|cite_end|> <|cite_start|> (Reference: Near-Optimal Straggler Mitigation for Distributed Gradient Methods: Modern learning algorithms use gradient descent updates to train inferential models that best explain data. Scaling these approaches to massive data sizes requires proper distributed gradient descent schemes where distributed worker nodes compute partial gradients based on their partial and local data sets, and send the results to a master node where all the computations are aggregated into a full gradient and the learning model is updated. However, a major performance bottleneck that arises is that some of the worker nodes may run slow. These nodes a.k.a. stragglers can significantly slow down computation as the slowest node may dictate the overall computational time. We propose a distributed computing scheme, called Batched Coupon's Collector (BCC) to alleviate the effect of stragglers in gradient methods. We prove that our BCC scheme is robust to a near optimal number of random stragglers. We also empirically demonstrate that our proposed BCC scheme reduces the run-time by up to 85.4% over Amazon EC2 clusters when compared with other straggler mitigation strategies. We also generalize the proposed BCC scheme to minimize the completion time when implementing gradient descent-based algorithms over heterogeneous worker nodes.) <|cite_end|>. Techniques studied in <|cite_start|> (Reference: Hierarchical Coded Computation: Coded computation is a method to mitigate "stragglers" in distributed computing systems through the use of error correction coding that has lately received significant attention. First used in vector-matrix multiplication, the range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback to previous results is they completely ignore work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work completed by stragglers can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent. We propose design guidelines to optimize parameters of such codes. Numerical results show the proposed scheme has an improvement of a factor of 1.5 in the expected finishing time compared to previous work.) <|cite_end|> <|cite_start|> (Reference: Exploitation of Stragglers in Coded Computation: In cloud computing systems slow processing nodes, often referred to as "stragglers", can significantly extend the computation time. Recent results have shown that error correction coding can be used to reduce the effect of stragglers. In this work we introduce a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by all nodes, including stragglers. 
We first consider vector-matrix multiplication and apply maximum distance separable (MDS) codes to small blocks of sub-matrices. The worker nodes process blocks sequentially, working block-by-block, transmitting partial per-block results to the master as they are completed. Sub-blocking allows a more continuous completion process, which thereby allows us to exploit the work of a much broader spectrum of processors and reduces computation time. We then apply this technique to matrix-matrix multiplication using product code. In this case, we show that the order of computing sub-tasks is a new degree of design freedom that can be exploited to reduce computation time further. We propose a novel approach to analyze the finishing time, which is different from typical order statistics. Simulation results show that the expected computation time decreases by a factor of at least two in compared to previous methods.) <|cite_end|> <|cite_start|> (Reference: Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication: Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of straggling nodes -- computing nodes that unpredictably slowdown or fail -- is a major bottleneck in such distributed computations. Ideal load balancing strategies that dynamically allocate more tasks to faster nodes require knowledge or monitoring of node speeds as well as the ability to quickly move data. Recently proposed fixed-rate erasure coding strategies can handle unpredictable node slowdown, but they ignore partial work done by straggling nodes thus resulting in a lot of redundant computation. We propose a \emph{rateless fountain coding} strategy that achieves the best of both worlds -- we prove that its latency is asymptotically equal to ideal load balancing, and it performs asymptotically zero redundant computations. Our idea is to create linear combinations of the $m$ rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than $m$ row-vector products are collectively finished by the nodes. We conduct experiments in three computing environments: local parallel computing, Amazon EC2, and Amazon Lambda, which show that rateless coding gives as much as $3\times$ speed-up over uncoded schemes.) <|cite_end|> <|cite_start|> (Reference: Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers: Distributed gradient descent (DGD) is an efficient way of implementing gradient descent (GD), especially for large data sets, by dividing the computation tasks into smaller subtasks and assigning to different computing servers (CSs) to be executed in parallel. In standard parallel execution, per-iteration waiting time is limited by the execution time of the straggling servers. Coded DGD techniques have been introduced recently, which can tolerate straggling servers via assigning redundant computation tasks to the CSs. In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks. However, although the straggling servers cannot complete all their assigned tasks, they are often able to complete a certain portion of them. 
In this paper, we allow multiple transmissions from each CS at each iteration in order to make sure a maximum number of completed computations can be reported to the aggregating server (AS), including the straggling servers. We numerically show that the average completion time per iteration can be reduced significantly by slightly increasing the communication load per server.) <|cite_end|> are based on coding with associated encoding and decoding complexities, which require the availability and processing of all the data points at the \textit{master}. In <|cite_start|> (Reference: Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers: Distributed gradient descent (DGD) is an efficient way of implementing gradient descent (GD), especially for large data sets, by dividing the computation tasks into smaller subtasks and assigning to different computing servers (CSs) to be executed in parallel. In standard parallel execution, per-iteration waiting time is limited by the execution time of the straggling servers. Coded DGD techniques have been introduced recently, which can tolerate straggling servers via assigning redundant computation tasks to the CSs. In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks. However, although the straggling servers cannot complete all their assigned tasks, they are often able to complete a certain portion of them. In this paper, we allow multiple transmissions from each CS at each iteration in order to make sure a maximum number of completed computations can be reported to the aggregating server (AS), including the straggling servers. We numerically show that the average completion time per iteration can be reduced significantly by slightly increasing the communication load per server.) <|cite_end|> a linear regression problem is studied, and the scheme in <|cite_start|> (Reference: Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding: We consider the problem of training a least-squares regression model on a large dataset using gradient descent. The computation is carried out on a distributed system consisting of a master node and multiple worker nodes. Such distributed systems are significantly slowed down due to the presence of slow-running machines (stragglers) as well as various communication bottlenecks. We propose "polynomially coded regression" (PCR) that substantially reduces the effect of stragglers and lessens the communication burden in such systems. The key idea of PCR is to encode the partial data stored at each worker, such that the computations at the workers can be viewed as evaluating a polynomial at distinct points. This allows the master to compute the final gradient by interpolating this polynomial. PCR significantly reduces the recovery threshold, defined as the number of workers the master has to wait for prior to computing the gradient. In particular, PCR requires a recovery threshold that scales inversely proportionally with the amount of computation/storage available at each worker. In comparison, state-of-the-art straggler-mitigation schemes require a much higher recovery threshold that only decreases linearly in the per worker computation/storage load. We prove that PCR's recovery threshold is near minimal and within a factor two of the best possible scheme.
Our experiments over Amazon EC2 demonstrate that compared with state-of-the-art schemes, PCR improves the run-time by 1.50x ~ 2.36x with naturally occurring stragglers, and by as much as 2.58x ~ 4.29x with artificial stragglers.) <|cite_end|> is extended by allowing each worker to communicate multiple computations sequentially, where the computations are carried out using coded data. The authors in <|cite_start|> (Reference: Hierarchical Coded Computation: Coded computation is a method to mitigate "stragglers" in distributed computing systems through the use of error correction coding that has lately received significant attention. First used in vector-matrix multiplication, the range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback to previous results is they completely ignore work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work completed by stragglers can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent. We propose design guidelines to optimize parameters of such codes. Numerical results show the proposed scheme has an improvement of a factor of 1.5 in the expected finishing time compared to previous work.) <|cite_end|> propose to split the computation tasks into multiple levels, and to code each level using MDS coding. However, the coding scheme depends on the statistical behavior of the stragglers, which may not be possible to predict accurately in practice. Distributed matrix-vector multiplication is studied in <|cite_start|> (Reference: Exploitation of Stragglers in Coded Computation: In cloud computing systems slow processing nodes, often referred to as "stragglers", can significantly extend the computation time. Recent results have shown that error correction coding can be used to reduce the effect of stragglers. In this work we introduce a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by all nodes, including stragglers. We first consider vector-matrix multiplication and apply maximum distance separable (MDS) codes to small blocks of sub-matrices. The worker nodes process blocks sequentially, working block-by-block, transmitting partial per-block results to the master as they are completed. Sub-blocking allows a more continuous completion process, which thereby allows us to exploit the work of a much broader spectrum of processors and reduces computation time. We then apply this technique to matrix-matrix multiplication using product code. In this case, we show that the order of computing sub-tasks is a new degree of design freedom that can be exploited to reduce computation time further. We propose a novel approach to analyze the finishing time, which is different from typical order statistics. Simulation results show that the expected computation time decreases by a factor of at least two in compared to previous methods.) <|cite_end|>.
It is shown that, by performing random coding across the dataset, the results can be obtained from a subset of all the tasks assigned to the workers with high probability, where each worker completes its assigned tasks sequentially. To execute tasks that are linear functions of their arguments, e.g., matrix-vector multiplication, rateless codes are used in <|cite_start|> (Reference: Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication: Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of straggling nodes -- computing nodes that unpredictably slowdown or fail -- is a major bottleneck in such distributed computations. Ideal load balancing strategies that dynamically allocate more tasks to faster nodes require knowledge or monitoring of node speeds as well as the ability to quickly move data. Recently proposed fixed-rate erasure coding strategies can handle unpredictable node slowdown, but they ignore partial work done by straggling nodes thus resulting in a lot of redundant computation. We propose a \emph{rateless fountain coding} strategy that achieves the best of both worlds -- we prove that its latency is asymptotically equal to ideal load balancing, and it performs asymptotically zero redundant computations. Our idea is to create linear combinations of the $m$ rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than $m$ row-vector products are collectively finished by the nodes. We conduct experiments in three computing environments: local parallel computing, Amazon EC2, and Amazon Lambda, which show that rateless coding gives as much as $3\times$ speed-up over uncoded schemes.) <|cite_end|>, requiring a large number of data points to be assigned to each worker to guarantee decodability of the target function at the master. While significant research efforts have been invested in designing coded computation <|cite_start|> (Reference: Gradient coding: Avoiding stragglers in distributed learning: We propose a novel coding theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in python (using MPI) to run on Amazon EC2, and show how we compare against baseline approaches in running time and generalization error.) <|cite_end|> <|cite_start|> (Reference: Improving Distributed Gradient Descent Using Reed-Solomon Codes: Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as \emph{stragglers}. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adapt the framework of Tandon et al.
(arXiv:1612.03301) and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^2)$ decoding algorithm. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.) <|cite_end|> <|cite_start|> (Reference: Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD: Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but cause gradient staleness that can adversely affect convergence. In this work we present a novel theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time). The novelty in our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness.) <|cite_end|> <|cite_start|> (Reference: Robust Gradient Descent via Moment Encoding with LDPC Codes: This paper considers the problem of implementing large-scale gradient descent algorithms in a distributed computing setting in the presence of {\em straggling} processors. To mitigate the effect of the stragglers, it has been previously proposed to encode the data with an erasure-correcting code and decode at the master server at the end of the computation. We, instead, propose to encode the second-moment of the data with a low density parity-check (LDPC) code. The iterative decoding algorithms for LDPC codes have very low computational overhead and the number of decoding iterations can be made to automatically adjust with the number of stragglers in the system. We show that for a random model for stragglers, the proposed moment encoding based gradient descent method can be viewed as the stochastic gradient descent method. This allows us to obtain convergence guarantees for the proposed solution. Furthermore, the proposed moment encoding based method is shown to outperform the existing schemes in a real distributed computing setup.) <|cite_end|> <|cite_start|> (Reference: Communication-Computation Efficient Gradient Coding: This paper develops coding techniques to reduce the running time of distributed learning tasks. It characterizes the fundamental tradeoff to compute gradients (and more generally vector summations) in terms of three parameters: computation load, straggler tolerance and communication cost. It further gives an explicit coding scheme that achieves the optimal tradeoff based on recursive polynomial constructions, coding both across data subsets and vector components. As a result, the proposed scheme allows to minimize the running time for gradient computations. Implementations are made on Amazon EC2 clusters using Python with mpi4py package. 
Results show that the proposed scheme maintains the same generalization error while reducing the running time by $32\%$ compared to uncoded schemes and $23\%$ compared to prior coded schemes focusing only on stragglers (Tandon et al., ICML 2017).) <|cite_end|> <|cite_start|> (Reference: Straggler Mitigation in Distributed Matrix Multiplication: Fundamental Limits and Optimal Coding: We consider the problem of massive matrix multiplication, which underlies many data analytic applications, in a large-scale distributed system comprising a group of worker nodes. We target the stragglers' delay performance bottleneck, which is due to the unpredictable latency in waiting for slowest nodes (or stragglers) to finish their tasks. We propose a novel coding strategy, named \emph{entangled polynomial code}, for designing the intermediate computations at the worker nodes in order to minimize the recovery threshold (i.e., the number of workers that we need to wait for in order to compute the final output). We demonstrate the optimality of entangled polynomial code in several cases, and show that it provides orderwise improvement over the conventional schemes for straggler mitigation. Furthermore, we characterize the optimal recovery threshold among all linear coding strategies within a factor of $2$ using \emph{bilinear complexity}, by developing an improved version of the entangled polynomial code. In particular, while evaluating bilinear complexity is a well-known challenging problem, we show that optimal recovery threshold for linear coding strategies can be approximated within a factor of $2$ of this fundamental quantity. On the other hand, the improved version of the entangled polynomial code enables further and orderwise reduction in the recovery threshold, compared to its basic version. Finally, we show that the techniques developed in this paper can also be extended to several other problems such as coded convolution and fault-tolerant computing, leading to tight characterizations.) <|cite_end|> <|cite_start|> (Reference: On the Optimal Recovery Threshold of Coded Matrix Multiplication: We provide novel coded computation strategies for distributed matrix-matrix products that outperform the recent "Polynomial code" constructions in recovery threshold, i.e., the required number of successful workers. When $m$-th fraction of each matrix can be stored in each worker node, Polynomial codes require $m^2$ successful workers, while our MatDot codes only require $2m-1$ successful workers, albeit at a higher communication cost from each worker to the fusion node. We also provide a systematic construction of MatDot codes. Further, we propose "PolyDot" coding that interpolates between Polynomial codes and MatDot codes to trade off communication cost and recovery threshold. Finally, we demonstrate a coding technique for multiplying $n$ matrices ($n \geq 3$) by applying MatDot and PolyDot coding ideas.) <|cite_end|> <|cite_start|> (Reference: Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication: We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to efficiently deal with straggling workers. 
The proposed strategy, named as \emph{polynomial codes}, achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics. Furthermore, we extend this code to distributed convolution and show its order-wise optimality.) <|cite_end|> <|cite_start|> (Reference: Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy: We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework to simultaneously provide (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state-of-the-art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., in terms of tolerating the maximum number of stragglers and adversaries, and providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$, and also achieves a $2.36\times$-$12.65\times$ speedup over the state-of-the-art straggler mitigation strategies.) <|cite_end|> <|cite_start|> (Reference: Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding: We consider the problem of training a least-squares regression model on a large dataset using gradient descent. The computation is carried out on a distributed system consisting of a master node and multiple worker nodes. Such distributed systems are significantly slowed down due to the presence of slow-running machines (stragglers) as well as various communication bottlenecks. We propose "polynomially coded regression" (PCR) that substantially reduces the effect of stragglers and lessens the communication burden in such systems. The key idea of PCR is to encode the partial data stored at each worker, such that the computations at the workers can be viewed as evaluating a polynomial at distinct points. This allows the master to compute the final gradient by interpolating this polynomial. PCR significantly reduces the recovery threshold, defined as the number of workers the master has to wait for prior to computing the gradient. 
In particular, PCR requires a recovery threshold that scales inversely proportionally with the amount of computation/storage available at each worker. In comparison, state-of-the-art straggler-mitigation schemes require a much higher recovery threshold that only decreases linearly in the per worker computation/storage load. We prove that PCR's recovery threshold is near minimal and within a factor two of the best possible scheme. Our experiments over Amazon EC2 demonstrate that compared with state-of-the-art schemes, PCR improves the run-time by 1.50x ~ 2.36x with naturally occurring stragglers, and by as much as 2.58x ~ 4.29x with artificial stragglers.) <|cite_end|> techniques, we argue in this paper that uncoded computing and communication can be even more effective in tackling stragglers and reducing the average computation time. We consider the computation of an arbitrary function over a dataset, and introduce a centralized scheduling strategy for uncoded distributed computation, where the tasks are assigned to the workers by the master. Each worker can compute a limited number of tasks, referred to as the \textit{computation load}. Computations are carried out sequentially, and the result of each computation is sent to the master right after it is completed. Communication delay from the workers to the master is also taken into account. We assume that both the computation and communication delays are independent across the workers, but may be correlated for different tasks carried out at the same worker. This sequential computation and communication framework allows the master to exploit partial computations by slow workers. The computation is assumed to be completed when the master receives a sufficient number of distinct computations, referred to as the \textit{computation target}. Unlike coded computation, the uncoded computing approach does not introduce any encoding and decoding delays and complexities; hence, it can be particularly efficient for edge learning where the data is inherently distributed <|cite_start|> (Reference: Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air: We study collaborative machine learning at the wireless edge, where power and bandwidth-limited devices (workers), with limited local datasets, implement distributed stochastic gradient descent (DSGD) over-the-air with the help of a remote parameter server (PS). We consider a wireless multiple access channel (MAC) from the workers to the PS for communicating the local gradient estimates. We first introduce a digital DSGD (D-DSGD) scheme, assuming that the workers operate on the boundary of the MAC capacity region at each iteration of the DSGD algorithm, and digitize their estimates within the bit budget allowed by the employed power allocation. We then introduce an analog scheme, called A-DSGD, motivated by the additive nature of the wireless MAC, where the workers send their gradient estimates over the MAC through the available channel bandwidth without employing any digital code. Numerical results show that A-DSGD converges much faster than D-DSGD. The improvement is particularly compelling at low power and low bandwidth regimes. We also observe that the performance of A-DSGD improves with the number of workers, while D-DSGD deteriorates, limiting the ability of the latter in harnessing the computation power of many edge devices.) <|cite_end|>.
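Below is a minimal simulation sketch of the sequential computation and communication framework just described: each worker processes its assigned tasks in order and forwards every result as soon as it is ready, and the master stops once the computation target is met. The cyclic task assignment, the exponential delay model, and all parameter values are illustrative assumptions only, not the scheduling strategy designed in this paper.

```python
# Sequential uncoded computation with per-result communication (sketch).
import heapq
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_tasks = 4, 8
load = 4            # computation load: tasks assigned to each worker
target = 8          # computation target: distinct results the master needs

# Redundant cyclic assignment: with these parameters every task is assigned
# to load * n_workers / n_tasks = 2 workers, in different orders.
stride = n_tasks // n_workers
assignment = [[(stride * w + j) % n_tasks for j in range(load)]
              for w in range(n_workers)]

# Each worker computes its tasks sequentially and sends each result upon
# completion; (arrival time at master, task id) events go into one queue.
events = []
for w, tasks in enumerate(assignment):
    speed = rng.uniform(0.5, 2.0)            # persistent per-worker slowdown
    t = 0.0
    for task in tasks:
        t += rng.exponential(speed)          # sequential computation delay
        arrival = t + rng.exponential(0.2)   # communication delay to master
        heapq.heappush(events, (arrival, task))

# The master collects results until the computation target is met.
received = set()
while len(received) < target:
    arrival, task = heapq.heappop(events)
    received.add(task)
print(f"computation target met at time {arrival:.2f}")
```

Since the results are uncoded, the partial work of slow workers counts toward the computation target directly, and the master performs no decoding, only bookkeeping of which distinct tasks have arrived.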
It also allows partial decoding, which can be exploited to reduce the communication load for distributed learning <|cite_start|> (Reference: Federated Learning: Strategies for Improving Communication Efficiency: Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.) <|cite_end|> <|cite_start|> (Reference: {1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs: We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss.) <|cite_end|> <|cite_start|> (Reference: Scalable Distributed DNN Training using Commodity GPU Cloud Computing: We introduce a new method for scaling up distributed Stochastic Gradient Descent (SGD) training of Deep Neural Networks (DNN). The method solves the well-known communication bottleneck problem that arises for data-parallel SGD because compute nodes frequently need to synchronize a replica of the model. We solve it by purposefully controlling the rate of weight-update per individual weight, which is in contrast to the uniform update-rate customarily imposed by the size of a mini-batch. It is shown empirically that the method can reduce the amount of communication by three orders of magnitude while training a typical DNN for acoustic modelling. 
This reduction in communication bandwidth enables efficient scaling to more parallel GPU nodes than any other method that we are aware of, and it can be achieved with neither loss in convergence rate nor accuracy in the resulting DNN. Furthermore, the training can be performed on commodity cloud infrastructure and networking.) <|cite_end|>. An uncoded computation approach is also considered in <|cite_start|> (Reference: Near-Optimal Straggler Mitigation for Distributed Gradient Methods: Modern learning algorithms use gradient descent updates to train inferential models that best explain data. Scaling these approaches to massive data sizes requires proper distributed gradient descent schemes where distributed worker nodes compute partial gradients based on their partial and local data sets, and send the results to a master node where all the computations are aggregated into a full gradient and the learning model is updated. However, a major performance bottleneck that arises is that some of the worker nodes may run slow. These nodes a.k.a. stragglers can significantly slow down computation as the slowest node may dictate the overall computational time. We propose a distributed computing scheme, called Batched Coupon's Collector (BCC) to alleviate the effect of stragglers in gradient methods. We prove that our BCC scheme is robust to a near optimal number of random stragglers. We also empirically demonstrate that our proposed BCC scheme reduces the run-time by up to 85.4% over Amazon EC2 clusters when compared with other straggler mitigation strategies. We also generalize the proposed BCC scheme to minimize the completion time when implementing gradient descent-based algorithms over heterogeneous worker nodes.) <|cite_end|>, where the dataset is split into a limited number of mini-batches, and each worker is randomly assigned a mini-batch of data. This approach requires a large number of workers compared to the number of mini-batches to ensure that the master can recover all the data from the workers with high probability. The authors in <|cite_start|> (Reference: Combating Computational Heterogeneity in Large-Scale Distributed Computing via Work Exchange: Owing to data-intensive large-scale applications, distributed computation systems have gained significant recent interest, due to their ability of running such tasks over a large number of commodity nodes in a time efficient manner. One of the major bottlenecks that adversely impacts the time efficiency is the computational heterogeneity of distributed nodes, often limiting the task completion time due to the slowest worker. In this paper, we first present a lower bound on the expected computation time based on the work-conservation principle. We then present our approach of work exchange to combat the latency problem, in which faster workers can be reassigned additional leftover computations that were originally assigned to slower workers. We present two variations of the work exchange approach: a) when the computational heterogeneity knowledge is known a priori; and b) when heterogeneity is unknown and is estimated in an online manner to assign tasks to distributed workers. As a baseline, we also present and analyze the use of an optimized Maximum Distance Separable (MDS) coded distributed computation scheme over heterogeneous nodes. Simulation results also compare the proposed approach of work exchange, the baseline MDS coded scheme and the lower bound obtained via work-conservation principle. 
We show that the work exchange scheme achieves time for computation which is very close to the lower bound with limited coordination and communication overhead even when the knowledge about heterogeneity levels is not available.) <|cite_end|> study dynamic computation allocation across the workers with feedback providing information about the workers' speeds. The proposed uncoded computation approach in this paper does not impose any constraint on the number of workers, and is designed without any prior knowledge or feedback on the computation and communication delays at the workers. The problem under consideration is similar to the well-known job scheduling problem <|cite_start|> (Reference: Bounds on multiprocessing anomalies and packing algorithms: ) <|cite_end|>, in which a set of tasks is to be executed by multiple workers given a partial ordering of task execution and the delay associated with each task. The goal is to find a schedule minimizing the total delay, a problem shown to be NP-complete <|cite_start|> (Reference: NP-Complete Scheduling Problems: ) <|cite_end|>. This problem has been studied under different constraints for different applications, such as cloud computing <|cite_start|> (Reference: An ACO-LB algorithm for task scheduling in the cloud environment: In the face of a large number of task requests which are submitted by users, the cloud data centers need not only to finish these massive tasks but also to satisfy the user's service demand. How to allocate virtual machine reasonably and schedule the tasks efficiently becomes a key problem to be solved in the cloud environment. This paper proposes a ACO-LB(Load balancing optimization algorithm based on ant colony algorithm) algorithm to solve the load imbalance of virtual machine in the process of task scheduling .The ACO-LB algorithm can adapt to the dynamic cloud environment. It will not only shorten the makespan of task scheduling, but also maintain the load balance of virtual machines in the data center. In this paper, the workflow scheduling is simulated in CloudSim. The results show that the proposed ACO-LB algorithm has better performance and load balancing ability.) <|cite_end|> <|cite_start|> (Reference: Improved PSO-based task scheduling algorithm in cloud computing: Job scheduling system problem is a core and challenging issue in cloud computing. How to use cloud computing resources efficiently and gain the maximum profits with job scheduling system is one of the cloud computing service providers’ ultimate goals. For characteristics of particle swarm optimization algorithm in solving the large-scale combination optimization problem easy to fall into the search speed slowly and partially the most superior, the global fast convergence of simulated annealing algorithm is utilized to combine particle swarm optimization algorithm in each iteration, which enhances the convergence rate and improves the efficiency. This paper proposed the improve particle swarm optimization algorithm in resources scheduling strategy of the cloud computing. Through experiments, the results show that this method can reduce the task average running time, and raises the rate availability of resources.) <|cite_end|> <|cite_start|> (Reference: Multi objective task scheduling using modified ant colony optimization in cloud computing: : Cloud computing is the development of distributed computing, parallel computing, and grid computing, or defined as a commercial implementation of such computer science concepts.
One of the main issues in a cloud computing environment is Task scheduling (TS). In Cloud task scheduling, many Non deterministic Polynomial time-hard optimization problem, and many meta-heuristic (MH) algorithms have been proposed to solve it. A task scheduler should adapt its scheduling strategy to changing environment and variable tasks. This paper amends a cloud task scheduling policy based on Modified Ant Colony Optimization (MACO) algorithm. The main contribution of recommended method is to minimize makespan and to perform Multi Objective Task Scheduling (MOTS) process by assigning pheromone amount relative to corresponding virtual machine efficiency. MACO algorithm improves the performance of task scheduling by reducing makespan and degree of imbalance comparatively lower than a basic ACO algorithm by its multi-objective and deliberate nature. Experimental outcomes have shown that proposed MACO to have makespan 350 milliseconds and average utilization of 0.51 for a set of 100 tasks.) <|cite_end|>, edge computing <|cite_start|> (Reference: Prioritized task scheduling in fog computing: Fog computing, similar to edge computing, has been proposed as a model to introduce a virtualized layer between the end users and the back-end cloud data centers. Fog computing has attracted much attention due to the recent rapid deployment of smart devices and Internet-of-Things (IoT) systems, which often requires real-time, stringent-delay services. The fog layer placed between client and cloud layers aims to reduce the delay in terms of transmission and processing times, as well as the overall cost. To support the increasing number of IoT, smart devices, and to improve performance and reduce cost, this paper proposes a task scheduling algorithm in the fog layer based on priority levels. The proposed architecture, queueing and priority models, priority assignment module, and the priority-based task scheduling algorithms are carefully described. Performance evaluation shows that, comparing with existing task scheduling algorithms, the proposed algorithm reduces the overall response time and notably decreases the total cost. We believe that this work is significant to the emerging fog computing technology, and the priority-based algorithm is useful to a wide range of application domains.) <|cite_end|> <|cite_start|> (Reference: Tasks scheduling and resource allocation in fog computing based on containers for smart manufacturing: Fog computing has been proposed as an extension of cloud computing to provide computation, storage, and network services in network edge. For smart manufacturing, fog computing can provide a wealth of computational and storage services, such as fault detection and state analysis of devices in assembly lines, if the middle layer between the industrial cloud and the terminal device is considered. However, limited resources and low-delay services hinder the application of new virtualization technologies in the task scheduling and resource management of fog computing. Thus, we build a new task-scheduling model by considering the role of containers. Then, we construct a task-scheduling algorithm to ensure that the tasks are completed on time and the number of concurrent tasks for the fog node is optimized. Finally, we propose a reallocation mechanism to reduce task delays in accordance with the characteristics of the containers. 
The results showed that our proposed task-scheduling algorithm and reallocation scheme can effectively reduce task delays and improve the concurrency number of the tasks in fog nodes.) <|cite_end|>, and dispersed computing <|cite_start|> (Reference: Scheduling tasks with precedence constraints on multiple servers: We consider the problem of scheduling jobs which are modeled by directed acyclic graphs (DAG). In such graphs, nodes represent tasks of a job and edges represent precedence constraints in processing these tasks. The DAG scheduling problem, also known as scheduling in fork-join processing networks, is motivated by examples such as job scheduling in data centers and cloud computing, patient flow scheduling in health systems and many other applications. We consider a flexible system, in which servers may process different, possibly overlapping, sets of task types. In this paper, we first discuss the difficulties in designing provably efficient policies for DAG scheduling, which arise due to interactions between the flexibility of the processing environment and the precedence constraints in the system. A major difficulty is the classical synchronization issue, which is further complicated in the presence of system flexibility. Then, we propose two queueing networks to model the scheduling problem that overcome this difficulty. These are virtual queues that enable us to design provably efficient scheduling policies. We show that the well-known Max-Weight policy for these queueing networks is throughput-optimal. Finally, to compare the delay performance of the two queueing networks, we consider a simplified model in which tasks and servers are identical. We characterize their delay performances under a simple first-come-first-serve policy, via a novel coupling argument.) <|cite_end|> <|cite_start|> (Reference: Communication-Aware Scheduling of Serial Tasks for Dispersed Computing: There is a growing interest in development of in-network dispersed computing paradigms that leverage the computing capabilities of heterogeneous resources dispersed across the network for processing massive amount of data is collected at the edge of the network. We consider the problem of task scheduling for such networks, in a dynamic setting in which arriving computation jobs are modeled as chains, with nodes representing tasks, and edges representing precedence constraints among tasks. In our proposed model, motivated by significant communication costs in dispersed computing environments, the communication times are taken into account. More specifically, we consider a network where servers are capable of serving all task types, and sending the results of processed tasks from one server to another server results in some communication delay that makes the design of optimal scheduling policy significantly more challenging than classical queueing networks. As the main contributions of the paper, we first characterize the capacity region of the network, then propose a novel virtual queueing network encoding the state of the network. Finally, we propose a Max-Weight type scheduling policy, and considering the virtual queueing network in the fluid limit, we use a Lyapunov argument to show that the policy is throughput-optimal.) <|cite_end|>. Our problem differs from the job scheduling one, since no ordering of task execution is imposed, and each task can be executed by an arbitrary number of workers. 
Also, in our model, the scheduling is designed without any prior knowledge about the computation and communication delays of the tasks. Assuming that the computation and communication delays are random variables, our goal is to characterize the minimum \textit{average completion time} as a function of the computation load and computation target. We first provide a generic expression for the average completion time as a function of the \textit{computation schedule}, which specifies both the tasks assigned to each worker and their computation order. We propose two different computation scheduling schemes, and obtain closed-form expressions for their average completion times for a general statistical model of the random delays, which upper bound the minimum average completion time. We also establish a lower bound on the minimum average completion time. Experiments on an Amazon EC2 cluster show that the proposed uncoded computing schemes with task scheduling substantially reduce the average completion time compared to both coded computation schemes and uncoded computation without scheduling of the tasks at the workers.
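To make the role of the computation schedule concrete, here is a minimal Monte Carlo sketch in Python. It is an illustration only, not the schemes analysed in this paper: every worker is assigned all tasks, per-task delays are hypothetical i.i.d. shifted-exponential random variables, the computation target is a single copy of each task, and communication delays are ignored. Even in this toy setting, cyclically shifted computation orders sharply reduce the completion time relative to identical orders, since every task is then reached early by some worker.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def avg_completion_time(orders, shift=1.0, rate=1.0, reps=5000):
    # orders[w] lists the tasks of worker w in execution order; each task
    # takes shift + Exp(rate) time, and a finished task is reported at once.
    n_workers, n_tasks = len(orders), len(orders[0])
    total = 0.0
    for _ in range(reps):
        delays = shift + rng.exponential(1.0 / rate, size=(n_workers, n_tasks))
        finish = np.cumsum(delays, axis=1)   # time worker w finishes its k-th task
        first_done = np.full(n_tasks, np.inf)
        for w, order in enumerate(orders):
            for k, task in enumerate(order):
                first_done[task] = min(first_done[task], finish[w, k])
        total += first_done.max()            # master waits for every task once
    return total / reps

n_tasks, n_workers = 12, 4
identical = [list(range(n_tasks)) for _ in range(n_workers)]
shifted = [list(np.roll(np.arange(n_tasks), -w * n_tasks // n_workers))
           for w in range(n_workers)]
print("identical orders :", avg_completion_time(identical))
print("shifted orders   :", avg_completion_time(shifted))
\end{verbatim}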
[ "<|reference_start|> Pumma: Parallel universal matrix multiplication algorithms on distributed memory concurrent computers: This paper describes the Parallel Universal Matrix Multiplication Algorithms (PUMMA) on distributed memory concurrent computers. The PUMMA package includes not only the non-transposed matrix multiplication routine C = A{center_dot}B, but also transposed multiplication routines C = A{sup T}{center_dot}B, C = A{center_dot}B{sup T}, and C = A{sup T}{center_dot}B{sup T}, for a block scattered data distribution. The routines perform efficiently for a wide range of processor configurations and block sizes. The PUMMA together provide the same functionality as the Level 3 BLAS routine xGEMM. Details of the parallel implementation of the routines are given, and results are presented for runs on the Intel Touchstone Delta computer. <|reference_end|>", "<|reference_start|> Gradient coding: Avoiding stragglers in distributed learning: We propose a novel coding theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in python (using MPI) to run on Amazon EC2, and show how we compare against baseline approaches in running time and generalization error. <|reference_end|>", "<|reference_start|> Hierarchical Coded Computation: Coded computation is a method to mitigate \"stragglers\" in distributed computing systems through the use of error correction coding that has lately received significant attention. First used in vector-matrix multiplication, the range of application was later extended to include matrix-matrix multiplication, heterogeneous networks, convolution, and approximate computing. A drawback to previous results is they completely ignore work completed by stragglers. While stragglers are slower compute nodes, in many settings the amount of work completed by stragglers can be non-negligible. Thus, in this work, we propose a hierarchical coded computation method that exploits the work completed by all compute nodes. We partition each node's computation into layers of sub-computations such that each layer can be treated as (distinct) erasure channel. We then design different erasure codes for each layer so that all layers have the same failure exponent. We propose design guidelines to optimize parameters of such codes. Numerical results show the proposed scheme has an improvement of a factor of 1.5 in the expected finishing time compared to previous work. <|reference_end|>", "<|reference_start|> Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers: Distributed gradient descent (DGD) is an efficient way of implementing gradient descent (GD), especially for large data sets, by dividing the computation tasks into smaller subtasks and assigning to different computing servers (CSs) to be executed in parallel. In standard parallel execution, per-iteration waiting time is limited by the execution time of the straggling servers. Coded DGD techniques have been introduced recently, which can tolerate straggling servers via assigning redundant computation tasks to the CSs. In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks. 
However, although the straggling servers cannot complete all their assigned tasks, they are often able to complete a certain portion of them. In this paper, we allow multiple transmissions from each CS at each iteration in order to make sure a maximum number of completed computations can be reported to the aggregating server (AS), including the straggling servers. We numerically show that the average completion time per iteration can be reduced significantly by slightly increasing the communication load per server. <|reference_end|>" ]
[ 1, 9, 21, 24 ]
{"<|multi_cite_1_1|>": "ss-689032", "<|multi_cite_1_2|>": "ss-1142855", "<|multi_cite_2_1|>": "arxiv-88739", "<|multi_cite_2_2|>": "ss-1516646", "<|multi_cite_2_3|>": "arxiv-126996", "<|multi_cite_2_4|>": "arxiv-150390", "<|multi_cite_2_5|>": "arxiv-159364", "<|multi_cite_2_6|>": "arxiv-147821", "<|cite_3|>": "arxiv-88739", "<|cite_4|>": "ss-1516646", "<|multi_cite_5_1|>": "ss-1516646", "<|multi_cite_5_2|>": "arxiv-126996", "<|multi_cite_5_3|>": "arxiv-150390", "<|multi_cite_5_4|>": "arxiv-159364", "<|multi_cite_5_5|>": "arxiv-147821", "<|multi_cite_6_1|>": "arxiv-146142", "<|multi_cite_6_2|>": "arxiv-146789", "<|multi_cite_6_3|>": "arxiv-125477", "<|cite_7|>": "arxiv-161114", "<|cite_8|>": "arxiv-159938", "<|cite_9|>": "arxiv-163912", "<|multi_cite_10_1|>": "arxiv-163912", "<|multi_cite_10_2|>": "arxiv-163913", "<|multi_cite_10_3|>": "arxiv-156526", "<|multi_cite_10_4|>": "arxiv-168496", "<|multi_cite_10_5|>": "arxiv-138326", "<|multi_cite_11_1|>": "arxiv-163912", "<|multi_cite_11_2|>": "arxiv-163913", "<|multi_cite_11_3|>": "arxiv-156526", "<|multi_cite_11_4|>": "arxiv-168496", "<|cite_12|>": "arxiv-168496", "<|cite_13|>": "arxiv-159938", "<|cite_14|>": "arxiv-163912", "<|cite_15|>": "arxiv-163913", "<|cite_16|>": "arxiv-156526", "<|multi_cite_17_1|>": "ss-1516646", "<|multi_cite_17_2|>": "arxiv-126996", "<|multi_cite_17_3|>": "arxiv-150390", "<|multi_cite_17_4|>": "arxiv-159364", "<|multi_cite_17_5|>": "arxiv-147821", "<|multi_cite_17_6|>": "arxiv-146142", "<|multi_cite_17_7|>": "arxiv-146789", "<|multi_cite_17_8|>": "arxiv-125477", "<|multi_cite_17_9|>": "arxiv-161114", "<|multi_cite_17_10|>": "arxiv-159938", "<|cite_18|>": "ss-1142856", "<|multi_cite_19_1|>": "arxiv-108095", "<|multi_cite_19_2|>": "ss-708784", "<|multi_cite_19_3|>": "ss-1037869", "<|cite_20|>": "arxiv-138326", "<|cite_21|>": "arxiv-140988", "<|cite_22|>": "ss-1142857", "<|cite_23|>": "ss-1154518", "<|multi_cite_24_1|>": "ss-1142858", "<|multi_cite_24_2|>": "ss-2050999", "<|multi_cite_24_3|>": "ss-1142859", "<|multi_cite_25_1|>": "ss-2138627", "<|multi_cite_25_2|>": "ss-2321750", "<|multi_cite_26_1|>": "ss-1448254", "<|multi_cite_26_2|>": "arxiv-155350", "<|cite_27|>": "arxiv-138326"}
2301.08202
<|paper_start|> Title: Differentially Private Online Bayesian Estimation With Adaptive Truncation Abstract: Differentially Private Online Bayesian Estimation With Adaptive Truncation: We propose a novel online and adaptive truncation method for differentially private Bayesian online estimation of a static parameter regarding a population. We assume that sensitive information from individuals is collected sequentially and the inferential aim is to estimate, on-the-fly, a static parameter regarding the population to which those individuals belong. We propose sequential Monte Carlo to perform online Bayesian estimation. When individuals provide sensitive information in response to a query, it is necessary to perturb it with privacy-preserving noise to ensure the privacy of those individuals. The amount of perturbation is proportional to the sensitivity of the query, which is usually determined by the range of the queried information. The truncation technique we propose adapts to the previously collected observations to adjust the query range for the next individual. The idea is that, based on previous observations, we can carefully arrange the interval into which the next individual's information is to be truncated before being perturbed with privacy-preserving noise. In this way, we aim to design predictive queries with small sensitivity, hence small privacy-preserving noise, enabling more accurate estimation while maintaining the same level of privacy. To decide on the location and the width of the interval, we use an exploration-exploitation approach a la Thompson sampling with an objective function based on the Fisher information of the generated observation. We show the merits of our methodology with numerical examples. Introduction \label{sec: Introduction} During the past couple of decades, there has been a rapid increase in the amount of collected data as well as concerns about individuals' privacy. This has made privacy-preserving data analysis a popular and important subject in data science. Along the way, \emph{differential privacy} has become a popular framework for privacy-preserving data sharing algorithms <|cite_start|> (Reference: Differential Privacy: A discussion with Miguel Guevara, Damien Desfontaines, Jim Waldo, and Terry Coatta) <|cite_end|> <|cite_start|> (Reference: The algorithmic foundations of Differential Privacy: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition. After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation.
Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed. We then turn from fundamentals to applications other than query-release, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed. Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.) <|cite_end|>. There are two conflicting interests in privacy-preserving data analysis: (i) The individuals of a population who contribute to a data set with their sensitive information want to protect their privacy against all possible adversaries. (ii) Conflicting with that, it is desirable to estimate a common quantity of interest regarding the population based on sensitive data with reasonable accuracy. To put the conflict in a statistical context, we let $X_{t} \sim \mathcal{P}_{\theta}$ be the sensitive information of the $t$'th individual of a sample randomly chosen from a large population with a population distribution $\mathcal{P}_{\theta}$. We want to estimate $\theta$ while also protecting the privacy of the individuals contributing to the sample, i.e., without revealing `much' information about $X_{t}$s individually. In this paper, we are particularly interested in online Bayesian estimation of $\theta$ as we continually collect $Y_{1}, Y_{2}, \ldots$, which are the \emph{perturbed} versions of $X_{1}, X_{2}, \ldots$ respectively. The cases where individuals contribute to a data set continually are not rare: Imagine web users registering for a web application by entering their information, patients being admitted to a hospital, customers applying for a bank loan, etc. We address two interrelated questions: \begin{itemize} \item How can we improve the estimate of $\theta$ as we collect $Y_{1}, Y_{2}, \ldots$ continually? \item As we estimate $\theta$, how can we continually adjust the privacy-preserving mechanism that generates $Y_{t}$ from $X_{t}$ so that the estimation performance is improved as $t$ increases? \end{itemize} Differentially private Bayesian inference of $\theta$ has been the subject of several recent studies, with Monte Carlo being the main methodological tool for inference. Stochastic gradient MCMC algorithms were proposed in <|cite_start|> (Reference: Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo: We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility.
Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free"; and this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that use stochastic gradient for Hybrid Monte Carlo (HMC) sampling also preserve differentially privacy with minor or no modifications of the algorithmic procedure at all, these observations lead to an "anytime" algorithm for Bayesian learning under privacy constraint. We demonstrate that it performs much better than the state-of-the-art differential private methods on synthetic and real datasets.) <|cite_end|> <|cite_start|> (Reference: On Connecting Stochastic Gradient MCMC and Differential Privacy: Significant success has been realized recently on applying machine learning to real-world applications. There have also been corresponding concerns on the privacy of training data, which relates to data security and confidentiality issues. Differential privacy provides a principled and rigorous privacy guarantee on machine learning models. While it is common to design a model satisfying a required differential-privacy property by injecting noise, it is generally hard to balance the trade-off between privacy and utility. We show that stochastic gradient Markov chain Monte Carlo (SG-MCMC) -- a class of scalable Bayesian posterior sampling algorithms proposed recently -- satisfies strong differential privacy with carefully chosen step sizes. We develop theory on the performance of the proposed differentially-private SG-MCMC method. We conduct experiments to support our analysis and show that a standard SG-MCMC sampler without any modification (under a default setting) can reach state-of-the-art performance in terms of both privacy and utility on Bayesian learning.) <|cite_end|>, while reversible MCMC algorithms were proposed in <|cite_start|> (Reference: Differentially Private Markov Chain Monte Carlo: Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy-guarantees are not subject to unrealistic assumptions on Markov chain convergence and that is applicable to posterior inference in arbitrary models. Our algorithm is based on a decomposition of the Barker acceptance test that allows evaluating the R\'enyi DP privacy cost of the accept-reject choice. We further show how to improve the DP guarantee through data subsampling and approximate acceptance tests.) <|cite_end|> <|cite_start|> (Reference: Exact MCMC with differentially private moves: ) <|cite_end|> <|cite_start|> (Reference: Differentially Private Hamiltonian Monte Carlo: Markov chain Monte Carlo (MCMC) algorithms have long been the main workhorses of Bayesian inference. Among them, Hamiltonian Monte Carlo (HMC) has recently become very popular due to its efficiency resulting from effective use of the gradients of the target distribution. In privacy-preserving machine learning, differential privacy (DP) has become the gold standard in ensuring that the privacy of data subjects is not violated. 
Existing DP MCMC algorithms either use random-walk proposals, or do not use the Metropolis--Hastings (MH) acceptance test to ensure convergence without decreasing their step size to zero. We present a DP variant of HMC using the MH acceptance test that builds on a recently proposed DP MCMC algorithm called the penalty algorithm, and adds noise to the gradient evaluations of HMC. We prove that the resulting algorithm converges to the correct distribution, and is ergodic. We compare DP-HMC with the existing penalty, DP-SGLD and DP-SGNHT algorithms, and find that DP-HMC has better or equal performance than the penalty algorithm, and performs more consistently than DP-SGLD or DP-SGNHT.) <|cite_end|>. Those algorithms require as many interactions with sensitive data as the number of iterations they run for. An alternative scheme to that is called input perturbation, where the sensitive data are perturbed and shared once and for all, and all the subsequent Bayesian inference is performed on the perturbed data without further interaction with the sensitive data <|cite_start|> (Reference: On the Theory and Practice of Privacy-Preserving Bayesian Data Analysis: Bayesian inference has great promise for the privacy-preserving analysis of sensitive data, as posterior sampling automatically preserves differential privacy, an algorithmic notion of data privacy, under certain conditions (Dimitrakakis et al., 2014; Wang et al., 2015). While this one posterior sample (OPS) approach elegantly provides privacy "for free," it is data inefficient in the sense of asymptotic relative efficiency (ARE). We show that a simple alternative based on the Laplace mechanism, the workhorse of differential privacy, is as asymptotically efficient as non-private posterior inference, under general assumptions. This technique also has practical advantages including efficient use of the privacy budget for MCMC. We demonstrate the practicality of our approach on a time-series analysis of sensitive military records from the Afghanistan and Iraq wars disclosed by the Wikileaks organization.) <|cite_end|> <|cite_start|> (Reference: Probabilistic Inference and Differential Privacy: We identify and investigate a strong connection between probabilistic inference and differential privacy, the latter being a recent privacy definition that permits only indirect observation of data through noisy measurement. Previous research on differential privacy has focused on designing measurement processes whose output is likely to be useful on its own. We consider the potential of applying probabilistic inference to the measurements and measurement process to derive posterior distributions over the data sets and model parameters thereof. We find that probabilistic inference can improve accuracy, integrate multiple observations, measure uncertainty, and even provide posterior distributions over quantities that were not directly measured.) <|cite_end|> <|cite_start|> (Reference: Differentially Private Exponential Random Graphs: We propose methods to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network. Proposed techniques aim at fitting and estimating a wide class of exponential random graph models (ERGMs) in a differentially private manner, and thus offer rigorous privacy guarantees. More specifically, we use the randomized response mechanism to release networks under $\epsilon$-edge differential privacy. 
To maintain utility for statistical inference, treating the original graph as missing, we propose a way to use likelihood based inference and Markov chain Monte Carlo (MCMC) techniques to fit ERGMs to the produced synthetic networks. We demonstrate the usefulness of the proposed techniques on a real data example.) <|cite_end|> <|cite_start|> (Reference: Differentially Private Bayesian Inference for Exponential Families: The study of private inference has been sparked by growing concern regarding the analysis of data when it stems from sensitive sources. We present the first method for private Bayesian inference in exponential families that properly accounts for noise introduced by the privacy mechanism. It is efficient because it works only with sufficient statistics and not individual data. Unlike other methods, it gives properly calibrated posterior beliefs in the non-asymptotic data regime.) <|cite_end|> <|cite_start|> (Reference: ABCDP: Approximate Bayesian Computation with Differential Privacy: We develop a novel approximate Bayesian computation (ABC) framework, ABCDP, that produces differentially private (DP) and approximate posterior samples. Our framework takes advantage of the Sparse Vector Technique (SVT), widely studied in the differential privacy literature. SVT incurs the privacy cost only when a condition (whether a quantity of interest is above/below a threshold) is met. If the condition is met sparsely during the repeated queries, SVT can drastically reduces the cumulative privacy loss, unlike the usual case where every query incurs the privacy loss. In ABC, the quantity of interest is the distance between observed and simulated data, and only when the distance is below a threshold, we take the corresponding prior sample as a posterior sample. Hence, applying SVT to ABC is an organic way to transform an ABC algorithm to a privacy-preserving variant with minimal modification, but yields the posterior samples with a high privacy level. We theoretically analyze the interplay between the noise added for privacy and the accuracy of the posterior samples.) <|cite_end|> <|cite_start|> (Reference: Exact Inference with Approximate Computation for Differentially Private Data via Perturbations: This paper discusses how two classes of approximate computation algorithms can be adapted, in a modular fashion, to achieve exact statistical inference from differentially private data products. Considered are approximate Bayesian computation for Bayesian inference, and Monte Carlo Expectation-Maximization for likelihood inference. Up to Monte Carlo error, inference from these algorithms is exact with respect to the joint specification of both the analyst's original data model, and the curator's differential privacy mechanism. Highlighted is a duality between approximate computation on exact data, and exact computation on approximate data, which can be leveraged by a well-designed computational procedure for statistical inference.) <|cite_end|> <|cite_start|> (Reference: Statistic Selection and MCMC for Differentially Private Bayesian Estimation: This paper concerns differentially private Bayesian estimation of the parameters of a population distribution, when a statistic of a sample from that population is shared in noise to provide differential privacy. This work mainly addresses two problems: (1) What statistic of the sample should be shared privately? For the first question, i.e., the one about statistic selection, we promote using the Fisher information. 
We find out that, the statistic that is most informative in a non-privacy setting may not be the optimal choice under the privacy restrictions. We provide several examples to support that point. We consider several types of data sharing settings and propose several Monte Carlo-based numerical estimation methods for calculating the Fisher information for those settings. The second question concerns inference: (2) Based on the shared statistics, how could we perform effective Bayesian inference? We propose several Markov chain Monte Carlo (MCMC) algorithms for sampling from the posterior distribution of the parameter given the noisy statistic. The proposed MCMC algorithms can be preferred over one another depending on the problem. For example, when the shared statistics is additive and added Gaussian noise, a simple Metropolis-Hasting algorithm that utilizes the central limit theorem is a decent choice. We propose more advanced MCMC algorithms for several other cases of practical relevance. Our numerical examples involve comparing several candidate statistics to be shared privately. For each statistic, we perform Bayesian estimation based on the posterior distribution conditional on the privatized version of that statistic. We demonstrate that, the relative performance of a statistic, in terms of the mean squared error of the Bayesian estimator based on the corresponding privatized statistic, is adequately predicted by the Fisher information of the privatized statistic.) <|cite_end|> <|cite_start|> (Reference: Data augmentation mcmc for bayesian inference from privatized data: Differentially private mechanisms protect privacy by introducing additional randomness into the data. Restricting access to only the privatized data makes it challenging to perform valid statistical inference on parameters underlying the confidential data. Specifically, the likelihood function of the privatized data requires integrating over the large space of confidential databases and is typically intractable. For Bayesian analysis, this results in a posterior distribution that is doubly intractable, rendering traditional MCMC techniques inapplicable. We propose an MCMC framework to perform Bayesian inference from the privatized data, which is applicable to a wide range of statistical models and privacy mechanisms. Our MCMC algorithm augments the model parameters with the unobserved confidential data, and alternately updates each one conditional on the other. For the potentially challenging step of updating the confidential data, we propose a generic approach that exploits the privacy guarantee of the mechanism to ensure efficiency. We give results on the computational complexity, acceptance rate, and mixing properties of our MCMC. We illustrate the efficacy and applicability of our methods on a na\"ive-Bayes log-linear model as well as on a linear regression model.) <|cite_end|>. All the cited works above consider differentially private Bayesian inference conditional on a batch (static) data set. Unlike those works, in this paper, we consider the case with continual observations, where data from the individuals are collected \emph{sequentially} in a privacy-preserving way. This scenario enables two methodological opportunities and/or challenges: \begin{enumerate} \item One can (and/or should) estimate the static parameter on-the-fly, that is, update the estimate as data are being received. 
Differentially private estimation under continual observation has been the subject of several works initiated by <|cite_start|> (Reference: Differential Privacy under Continual Observation: Differential privacy is a recent notion of privacy tailored to privacy-preserving data analysis [11]. Up to this point, research on differentially private data analysis has focused on the setting of a trusted curator holding a large, static, data set; thus every computation is a "one-shot" object: there is no point in computing something twice, since the result will be unchanged, up to any randomness introduced for privacy. However, many applications of data analysis involve repeated computations, either because the entire goal is one of monitoring, e.g., of traffic conditions, search trends, or incidence of influenza, or because the goal is some kind of adaptive optimization, e.g., placement of data to minimize access costs. In these cases, the algorithm must permit continual observation of the system's state. We therefore initiate a study of differential privacy under continual observation. We identify the problem of maintaining a counter in a privacy preserving manner and show its wide applicability to many different problems.) <|cite_end|>; other important contributions include <|cite_start|> (Reference: Private and Continual Release of Statistics: We ask the question: how can Web sites and data aggregators continually release updated statistics, and meanwhile preserve each individual user's privacy? Suppose we are given a stream of 0's and 1's. We propose a differentially private continual counter that outputs at every time step the approximate number of 1's seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow Web sites to continually give top-k and hot items suggestions while preserving users' privacy.) <|cite_end|> <|cite_start|> (Reference: Quantifying Differential Privacy under Temporal Correlations: Differential Privacy (DP) has received increased attention as a rigorous privacy framework. Existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent, or that adversaries do not have knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations using Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such temporal correlations. Our analysis reveals that the privacy leakage of a DP mechanism may accumulate and increase over time. We call it temporal privacy leakage. Second, to measure such privacy leakage, we design an efficient algorithm for calculating it in polynomial time. Although the temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective.) <|cite_end|>.
However, those works are usually applied to online tracking of dynamic summaries of data, such as the count of a certain property, rather than estimating a static parameter of the population from which the sensitive data are being received. In particular, they do not consider Bayesian estimation. \item As we estimate the parameter, we can adaptively adjust the query for the next individual's information to make the response as \emph{informative} as possible. For example, if, based on the noisy income values collected so far from 100 individuals, we have estimated that the mean income of the population is around $\hat{\mu}$, we can ask the $101$'st individual to provide their income information after \emph{truncating} it to an interval around $\hat{\mu}$, such as $[\hat{\mu} - \Delta, \hat{\mu} + \Delta]$, and \emph{then} privatising it by adding noise to the (possibly) truncated value. The motivation behind pursuing such an adaptive truncation technique is to improve the estimation performance with less noisy data while maintaining a given level of privacy. The standard deviation of the privacy-preserving noise added to the outcome of a query is proportional to the sensitivity of the query. By default, the queried information may be unbounded or have very large ranges, resulting in low utility. Continuing with the income example above, assume that the natural limits of an income are $[x_{\min}, x_{\max}]$, so that a query that directly asks for income information has a sensitivity of $x_{\max} - x_{\min}$, which is typically large. If adaptive truncation were used instead, the query interval for the $101$'st individual would be $[\hat{\mu} - \Delta, \hat{\mu} + \Delta]$ with sensitivity $2 \Delta$. A code sketch of this truncate-then-perturb mechanism is given just after this list. \end{enumerate}
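As a concrete companion to the income example in item 2 above, the sketch below compares the two query designs in Python. The Laplace mechanism is the standard choice for this kind of numeric query, but the income model, the privacy level $\epsilon$, the natural limits, and the interval half-width $\Delta$ are all hypothetical values, not quantities from this paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def private_response(x, lo, hi, eps):
    # eps-DP answer to "report your value truncated to [lo, hi]":
    # the truncated query has sensitivity hi - lo, so Laplace noise
    # with scale (hi - lo) / eps suffices.
    x_trunc = min(max(x, lo), hi)
    return x_trunc + rng.laplace(scale=(hi - lo) / eps)

eps = 1.0
x_min, x_max = 0.0, 1e6                   # hypothetical natural income limits
incomes = rng.lognormal(mean=10.5, sigma=0.5, size=500)
mu_hat, delta = 4e4, 2e4                  # stand-ins for a running estimate and half-width

y_wide = [private_response(x, x_min, x_max, eps) for x in incomes]
y_narrow = [private_response(x, mu_hat - delta, mu_hat + delta, eps) for x in incomes]
print("noise scale, wide query  :", (x_max - x_min) / eps)  # 1e6
print("noise scale, narrow query:", 2 * delta / eps)        # 4e4
\end{verbatim}

The narrow query trades a possible truncation bias for a noise scale that is smaller by orders of magnitude, which is precisely the trade-off the adaptive method is designed to manage.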
Truncation is considered in many works as a natural way to have finite sensitivity; see <|cite_start|> (Reference: Differentially Private Bayesian Learning on Distributed Data: Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. The standard DP algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from data holders and the Gaussian mechanism for DP. Our method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost.) <|cite_end|> <|cite_start|> (Reference: Data augmentation mcmc for bayesian inference from privatized data: Differentially private mechanisms protect privacy by introducing additional randomness into the data. Restricting access to only the privatized data makes it challenging to perform valid statistical inference on parameters underlying the confidential data. Specifically, the likelihood function of the privatized data requires integrating over the large space of confidential databases and is typically intractable. For Bayesian analysis, this results in a posterior distribution that is doubly intractable, rendering traditional MCMC techniques inapplicable. We propose an MCMC framework to perform Bayesian inference from the privatized data, which is applicable to a wide range of statistical models and privacy mechanisms. Our MCMC algorithm augments the model parameters with the unobserved confidential data, and alternately updates each one conditional on the other. For the potentially challenging step of updating the confidential data, we propose a generic approach that exploits the privacy guarantee of the mechanism to ensure efficiency. We give results on the computational complexity, acceptance rate, and mixing properties of our MCMC. We illustrate the efficacy and applicability of our methods on a na\"ive-Bayes log-linear model as well as on a linear regression model.) <|cite_end|> for examples of differentially private Bayesian estimation based on truncated data. Those works regard estimation based on batch data; adaptive truncation during online Bayesian learning, as done in this paper, is not considered. This paper contributes to the literature on differential privacy by addressing the two challenges described above with a novel methodology. For the first challenge, that is, online Bayesian estimation of $\theta$, we propose a sequential Monte Carlo (SMC) method for static parameter estimation as studied in <|cite_start|> (Reference: Following a Moving Target-Monte Carlo Inference for Dynamic Bayesian Models: Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.) <|cite_end|> <|cite_start|> (Reference: A Sequential Particle Filter Method for Static Models: Particle filter methods are complex inference procedures, which combine importance sampling and Monte Carlo schemes in order to explore consistently a sequence of multiple distributions of interest. We show that such methods can also offer an efficient estimation tool in 'static' set-ups, in which case $p(\theta \mid y_{1}, \ldots, y_{n})$) <|cite_end|>. For the second challenge, we propose a novel adaptive truncation method that employs an \emph{exploration-exploitation} heuristic to maximise the aggregate `information' in the sequence of observations $Y_{1}, Y_{2}, \ldots$ about $\theta$. To measure the amount of `information', we choose the Fisher information as suggested in <|cite_start|> (Reference: Statistic Selection and MCMC for Differentially Private Bayesian Estimation: This paper concerns differentially private Bayesian estimation of the parameters of a population distribution, when a statistic of a sample from that population is shared in noise to provide differential privacy. This work mainly addresses two problems: (1) What statistic of the sample should be shared privately? For the first question, i.e., the one about statistic selection, we promote using the Fisher information.
We find out that, the statistic that is most informative in a non-privacy setting may not be the optimal choice under the privacy restrictions. We provide several examples to support that point. We consider several types of data sharing settings and propose several Monte Carlo-based numerical estimation methods for calculating the Fisher information for those settings. The second question concerns inference: (2) Based on the shared statistics, how could we perform effective Bayesian inference? We propose several Markov chain Monte Carlo (MCMC) algorithms for sampling from the posterior distribution of the parameter given the noisy statistic. The proposed MCMC algorithms can be preferred over one another depending on the problem. For example, when the shared statistics is additive and added Gaussian noise, a simple Metropolis-Hasting algorithm that utilizes the central limit theorem is a decent choice. We propose more advanced MCMC algorithms for several other cases of practical relevance. Our numerical examples involve comparing several candidate statistics to be shared privately. For each statistic, we perform Bayesian estimation based on the posterior distribution conditional on the privatized version of that statistic. We demonstrate that, the relative performance of a statistic, in terms of the mean squared error of the Bayesian estimator based on the corresponding privatized statistic, is adequately predicted by the Fisher information of the privatized statistic.) <|cite_end|>. As we show in Section \ref{sec: Adaptive truncation for the transformation}, the \emph{exploration} part of the proposed approach can be seen as an instance of Thompson sampling <|cite_start|> (Reference: A Tutorial on Thompson Sampling: Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use. This tutorial covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. We will also discuss when and why Thompson sampling is or is not effective and relations to alternative algorithms.) <|cite_end|> from reinforcement learning. The \emph{exploitation} part consists of finding the truncation points that make the resulting observations most informative in terms of Fisher information. Finally, for the \emph{exploitation} step, we pay special attention to \emph{location-scale} families and show that the maximisation task can be performed for all time steps once and for all. To the best of our knowledge, this is the first work that tackles the problem of online differentially private Bayesian estimation with adaptive queries.
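Putting the pieces together, the following Python sketch shows the overall loop in a toy setting. It is an illustration under stated assumptions rather than the algorithm of this paper: the population is taken to be $\mathcal{N}(\mu, 1)$ with unknown mean $\mu$, the interval half-width $\Delta$ is fixed instead of being optimised through the Fisher information, the intractable likelihood of a privatised response is approximated by a crude Monte Carlo average, and resampled particles are merely jittered rather than moved with a proper rejuvenation kernel.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def clip(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def lap_pdf(z, b):
    return np.exp(-np.abs(z) / b) / (2.0 * b)

def obs_lik(y, mu, lo, hi, b, m=100):
    # Monte Carlo estimate of p(y | mu) for y = clip(x, lo, hi) + Laplace(b)
    # with x ~ N(mu, 1) (assumed population model)
    x = rng.normal(mu, 1.0, size=m)
    return lap_pdf(y - clip(x, lo, hi), b).mean()

eps, delta, true_mu, P = 1.0, 2.0, 3.0, 200
particles = rng.normal(0.0, 5.0, size=P)         # prior N(0, 25) over mu
weights = np.full(P, 1.0 / P)

for t in range(100):
    centre = rng.choice(particles, p=weights)    # Thompson-style exploration
    lo, hi = centre - delta, centre + delta      # query interval, sensitivity 2*delta
    b = 2.0 * delta / eps                        # Laplace scale for eps-DP
    x_t = rng.normal(true_mu, 1.0)               # the individual's sensitive value
    y_t = clip(x_t, lo, hi) + rng.laplace(scale=b)
    weights = weights * np.array([obs_lik(y_t, mu, lo, hi, b) for mu in particles])
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < P / 2:         # resample and jitter on low ESS
        idx = rng.choice(P, size=P, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.1, size=P)
        weights = np.full(P, 1.0 / P)

print("posterior mean estimate:", float(np.sum(weights * particles)))
\end{verbatim}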
The paper is organised as follows. In Section \ref{sec: Differential Privacy}, we introduce the basic concepts of differential privacy. In Section \ref{sec: Adaptive differentially private parameter estimation}, we discuss the problem of online parameter estimation using privatised noisy statistics of the sensitive data and present our methodology in general. In Sections \ref{sec: Sequential Monte Carlo for Bayesian estimation} and \ref{sec: Adaptive truncation for the transformation}, we describe the details of our methodology. In Section \ref{sec: Numerical results}, we present the results of our numerical experiments. Finally, we give our concluding remarks in Section \ref{sec: Conclusion}. Some deferred details are collected in the Appendix. <|paper_end|>
[ "<|reference_start|> The algorithmic foundations of Differential Privacy: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it. <|reference_end|>", "<|reference_start|> On the Theory and Practice of Privacy-Preserving Bayesian Data Analysis: Bayesian inference has great promise for the privacy-preserving analysis of sensitive data, as posterior sampling automatically preserves differential privacy, an algorithmic notion of data privacy, under certain conditions (Dimitrakakis et al., 2014; Wang et al., 2015). While this one posterior sample (OPS) approach elegantly provides privacy \"for free,\" it is data inefficient in the sense of asymptotic relative efficiency (ARE). We show that a simple alternative based on the Laplace mechanism, the workhorse of differential privacy, is as asymptotically efficient as non-private posterior inference, under general assumptions. This technique also has practical advantages including efficient use of the privacy budget for MCMC. We demonstrate the practicality of our approach on a time-series analysis of sensitive military records from the Afghanistan and Iraq wars disclosed by the Wikileaks organization. 
<|reference_end|>", "<|reference_start|> Exact Inference with Approximate Computation for Differentially Private Data via Perturbations: This paper discusses how two classes of approximate computation algorithms can be adapted, in a modular fashion, to achieve exact statistical inference from differentially private data products. Considered are approximate Bayesian computation for Bayesian inference, and Monte Carlo Expectation-Maximization for likelihood inference. Up to Monte Carlo error, inference from these algorithms is exact with respect to the joint specification of both the analyst's original data model, and the curator's differential privacy mechanism. Highlighted is a duality between approximate computation on exact data, and exact computation on approximate data, which can be leveraged by a well-designed computational procedure for statistical inference. <|reference_end|>", "<|reference_start|> Data augmentation mcmc for bayesian inference from privatized data: Differentially private mechanisms protect privacy by introducing additional randomness into the data. Restricting access to only the privatized data makes it challenging to perform valid statistical inference on parameters underlying the confidential data. Specifically, the likelihood function of the privatized data requires integrating over the large space of confidential databases and is typically intractable. For Bayesian analysis, this results in a posterior distribution that is doubly intractable, rendering traditional MCMC techniques inapplicable. We propose an MCMC framework to perform Bayesian inference from the privatized data, which is applicable to a wide range of statistical models and privacy mechanisms. Our MCMC algorithm augments the model parameters with the unobserved confidential data, and alternately updates each one conditional on the other. For the potentially challenging step of updating the confidential data, we propose a generic approach that exploits the privacy guarantee of the mechanism to ensure efficiency. We give results on the computational complexity, acceptance rate, and mixing properties of our MCMC. We illustrate the efficacy and applicability of our methods on a na\\\"ive-Bayes log-linear model as well as on a linear regression model. <|reference_end|>" ]
[ 1, 7, 12, 19 ]
{"<|multi_cite_1_1|>": "ss-702958", "<|multi_cite_1_2|>": "ss-767290", "<|multi_cite_4_1|>": "ss-1278859", "<|multi_cite_4_2|>": "arxiv-143993", "<|multi_cite_5_1|>": "arxiv-189368", "<|multi_cite_5_2|>": "ss-929551", "<|multi_cite_5_3|>": "arxiv-349107", "<|multi_cite_2_1|>": "arxiv-94503", "<|multi_cite_2_2|>": "ss-1006837", "<|multi_cite_2_3|>": "arxiv-66153", "<|multi_cite_2_4|>": "arxiv-171646", "<|multi_cite_2_5|>": "arxiv-228332", "<|multi_cite_2_6|>": "ss-1183958", "<|multi_cite_2_7|>": "arxiv-408224", "<|multi_cite_2_8|>": "ss-829749", "<|cite_6|>": "ss-819301", "<|multi_cite_7_1|>": "ss-1263009", "<|multi_cite_7_2|>": "arxiv-108521", "<|multi_cite_8_1|>": "arxiv-118109", "<|multi_cite_8_2|>": "ss-829749", "<|multi_cite_9_1|>": "ss-808994", "<|multi_cite_9_2|>": "ss-681053", "<|cite_10|>": "arxiv-408224", "<|cite_3|>": "arxiv-128689"}
1109.0351
<|paper_start|> Title: Directed Information, Causal Estimation, and Communication in Continuous Time Abstract: Directed Information, Causal Estimation, and Communication in Continuous Time: A notion of directed information between two continuous-time processes is proposed. A key component in the definition is taking an infimum over all possible partitions of the time interval, which plays a role no less significant than the supremum over "space" partitions inherent in the definition of mutual information. Properties and operational interpretations in estimation and communication are then established for the proposed notion of directed information. For the continuous-time additive white Gaussian noise channel, it is shown that Duncan's classical relationship between causal estimation and information continues to hold in the presence of feedback upon replacing mutual information by directed information. A parallel result is established for the Poisson channel. The utility of this relationship is then demonstrated in computing the directed information rate between the input and output processes of a continuous-time Poisson channel with feedback, where the channel input process is constrained to be constant between events at the channel output. Finally, the capacity of a wide class of continuous-time channels with feedback is established via directed information, characterizing the fundamental limit on reliable communication. Introduction The directed information $I(X^n \to Y^n)$ between two random $n$-sequences $X^n = (X_1,\ldots, X_n)$ and $Y^n = (Y_1,\ldots,Y_n)$ is a natural generalization of Shannon's mutual information to random objects obeying causal relations. Introduced by Massey <|cite_start|> (Reference: Causality, feedback, and directed information: It is shown that the "usual definition" of a discrete memoryless channel (DMC) in fact prohibits the use of feedback. The difficulty stems from the confusion of causality and statistical dependence. An adequate definition of a DMC is given, as well as a definition of using a channel without feedback. A definition, closely based on an old idea of Marko, is given for the directed information flowing from one sequence to another. This directed information is used to give a simple proof of the well-known fact that the use of feedback cannot increase the capacity of a DMC. It is shown that, when feedback is present, directed information is a more useful quantity than the traditional mutual information.) <|cite_end|>, this notion has been shown to arise as the canonical answer to a variety of problems with causally dependent components. For example, it plays a pivotal role in characterizing the capacity $C_\text{FB}$ of a communication channel with feedback. Massey <|cite_start|> (Reference: Causality, feedback, and directed information: It is shown that the "usual definition" of a discrete memoryless channel (DMC) in fact prohibits the use of feedback. The difficulty stems from the confusion of causality and statistical dependence. An adequate definition of a DMC is given, as well as a definition of using a channel without feedback. A definition, closely based on an old idea of Marko, is given for the directed information flowing from one sequence to another. This directed information is used to give a simple proof of the well-known fact that the use of feedback cannot increase the capacity of a DMC. It is shown that, when feedback is present, directed information is a more useful quantity than the traditional mutual information.) 
<|cite_end|> showed that the feedback capacity is upper bounded as \begin{equation} \label{eq:massey_ub} C_\text{FB} \le \lim_{n\to\infty} \max_{p(x^n||y^{n-1})}\frac{1}{n} I(X^n \to Y^n), \end{equation} where $I(X^n\to Y^n) = \sum_{i=1}^n I(X^i; Y_i|Y^{i-1})$ and $p(x^n||y^{n-1}) = \prod_{i=1}^{n} p(x_i|x^{i-1},y^{i-1})$; see also Kramer <|cite_start|> (Reference: Capacity Results for the Discrete Memoryless Network: The capacity region of the discrete memoryless network is expressed in terms of conditional mutual information and causally conditioned directed information. Codetrees play a central role in the capacity expressions.) <|cite_end|> that streamlines the notion of directed information by causal conditioning. The upper bound in~\eqref{eq:massey_ub} is tight for certain classes of ergodic channels, such as general nonanticipatory channels satisfying certain regularity conditions <|cite_start|> (Reference: The Capacity of Channels with Feedback: We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey's concept of directed information and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost optimality equation (ACOE) can be viewed as an implicit single-letter characterization of the capacity. Fifth, scenarios with simple sufficient statistics are described.) <|cite_end|>, channels with finite input memory and ergodic noise <|cite_start|> (Reference: A Coding Theorem for a Class of Stationary Channels with Feedback: A coding theorem is proved for a class of stationary channels with feedback in which the output Y_n = f(X_{n-m}^n, Z_{n-m}^n) is the function of the current and past m symbols from the channel input X_n and the stationary ergodic channel noise Z_n. In particular, it is shown that the feedback capacity is equal to $$ \limp_{n\to\infty} \sup_{p(x^n||y^{n-1})} \frac{1}{n} I(X^n \to Y^n), $$ where I(X^n \to Y^n) = \sum_{i=1}^n I(X^i; Y_i|Y^{i-1}) denotes the Massey directed information from the channel input to the output, and the supremum is taken over all causally conditioned distributions p(x^n||y^{n-1}) = \prod_{i=1}^n p(x_i|x^{i-1},y^{i-1}). The main ideas of the proof are the Shannon strategy for coding with side information and a new elementary coding technique for the given channel model without feedback, which is in a sense dual to Gallager's lossy coding of stationary ergodic sources. A similar approach gives a simple alternative proof of coding theorems for finite state channels by Yang-Kavcic-Tatikonda, Chen-Berger, and Permuter-Weissman-Goldsmith.) <|cite_end|>, and indecomposable finite-state channels <|cite_start|> (Reference: Finite State Channels with Time-Invariant Deterministic Feedback: We consider capacity of discrete-time channels with feedback for the general case where the feedback is a time-invariant deterministic function of the output samples. Under the assumption that the channel states take values in a finite alphabet, we find an achievable rate and an upper bound on the capacity. 
We further show that when the channel is indecomposable, and has no intersymbol interference (ISI), its capacity is given by the limit of the maximum of the (normalized) directed information between the input $X^N$ and the output $Y^N$, i.e. $C = \lim_{N \to \infty} \frac{1}{N} \max I(X^N \to Y^N)$, where the maximization is taken over the causal conditioning probability $Q(x^N||z^{N-1})$ defined in this paper. The capacity result is used to show that the source-channel separation theorem holds for time-invariant determinist feedback. We also show that if the state of the channel is known both at the encoder and the decoder then feedback does not increase capacity.) <|cite_end|>, paving the road to a computable characterization of feedback capacity; see <|cite_start|> (Reference: The capacity of finite-State Markov Channels With feedback: We consider a class of finite-state Markov channels with feedback. We first introduce a simplified equivalent channel model, and then construct the optimal stationary and nonstationary input processes that maximize the long-term directed mutual information. Furthermore, we give a sufficient condition under which the channel's Shannon capacity can be achieved by a stationary input process. The corresponding converse coding theorem and direct coding theorem are proved.) <|cite_end|> <|cite_start|> (Reference: Feedback Capacity of Stationary Gaussian Channels: The feedback capacity of additive stationary Gaussian noise channels is characterized as the solution to a variational problem. Toward this end, it is proved that the optimal feedback coding scheme is stationary. When specialized to the first-order autoregressive moving average noise spectrum, this variational characterization yields a closed-form expression for the feedback capacity. In particular, this result shows that the celebrated Schalkwijk-Kailath coding scheme achieves the feedback capacity for the first-order autoregressive moving average Gaussian channel, positively answering a long-standing open problem studied by Butman, Schalkwijk-Tiernan, Wolfowitz, Ozarow, Ordentlich, Yang-Kavcic-Tatikonda, and others. More generally, it is shown that a k-dimensional generalization of the Schalkwijk-Kailath coding scheme achieves the feedback capacity for any autoregressive moving average noise spectrum of order k. Simply put, the optimal transmitter iteratively refines the receiver's knowledge of the intended message.) <|cite_end|> <|cite_start|> (Reference: Capacity of the Trapdoor Channel with Feedback: We establish that the feedback capacity of the trapdoor channel is the logarithm of the golden ratio and provide a simple communication scheme that achieves capacity. As part of the analysis, we formulate a class of dynamic programs that characterize capacities of unifilar finite-state channels. The trapdoor channel is an instance that admits a simple analytic solution.) <|cite_end|> for examples. Directed information and its variants also characterize (via multiletter expressions) the capacity for two-way channels <|cite_start|> (Reference: Capacity Results for the Discrete Memoryless Network: The capacity region of the discrete memoryless network is expressed in terms of conditional mutual information and causally conditioned directed information. Codetrees play a central role in the capacity expressions.) 
<|cite_end|>, multiple access channels with feedback <|cite_start|> (Reference: Capacity Results for the Discrete Memoryless Network: The capacity region of the discrete memoryless network is expressed in terms of conditional mutual information and causally conditioned directed information. Codetrees play a central role in the capacity expressions.) <|cite_end|> <|cite_start|> (Reference: Capacity Region of the Finite-State Multiple Access Channel with and without Feedback: The capacity region of the Finite-State Multiple Access Channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We characterize both an inner and an outer bound for this region, using Masseys's directed information. These bounds are shown to coincide, and hence yield the capacity region, of FS-MACs where the state process is stationary and ergodic and not affected by the inputs. Though `multi-letter' in general, our results yield explicit conclusions when applied to specific scenarios of interest. E.g., our results allow us to: - Identify a large class of FS-MACs, that includes the additive mod-2 noise MAC where the noise may have memory, for which feedback does not enlarge the capacity region. - Deduce that, for a general FS-MAC with states that are not affected by the input, if the capacity (region) without feedback is zero, then so is the capacity (region) with feedback. - Deduce that the capacity region of a MAC that can be decomposed into a `multiplexer' concatenated by a point-to-point channel (with, without, or with partial feedback), the capacity region is given by $\sum_{m} R_m \leq C$, where C is the capacity of the point to point channel and m indexes the encoders. Moreover, we show that for this family of channels source-channel coding separation holds.) <|cite_end|>, broadcast channels with feedback <|cite_start|> (Reference: Capacity theorems for discrete, finite-state broadcast channels with feedback and unidirectional receiver cooperation: In this paper, we consider the discrete, time-varying broadcast channel (BC) with memory under the assumption that the channel states belong to a set of finite cardinality. We study the achievable rates in several scenarios of feedback and full unidirectional receiver cooperation. In particular, we focus on two scenarios: the first scenario is the general finite-state broadcast channel (FSBC) where both receivers send feedback to the transmitter while one receiver also sends its channel output to the second receiver. The second scenario is the degraded FSBC where only the strong receiver sends feedback to the transmitter. Using a superposition codebook construction, we derive the capacity regions for both scenarios. Combining elements from these two basic results, we obtain the capacity regions for a number of additional broadcast scenarios with feedback and unidirectional receiver cooperation.) <|cite_end|>, and compound channels with feedback <|cite_start|> (Reference: Feedback Capacity of the Compound Channel: In this work we find the capacity of a compound finite-state channel with time-invariant deterministic feedback. The model we consider involves the use of fixed length block codes. Our achievability result includes a proof of the existence of a universal decoder for the family of finite-state channels with feedback. As a consequence of our capacity result, we show that feedback does not increase the capacity of the compound Gilbert-Elliot channel. 
Additionally, we show that for a stationary and uniformly ergodic Markovian channel, if the compound channel capacity is zero without feedback then it is zero with feedback. Finally, we use our result on the finite-state channel to show that the feedback capacity of the memoryless compound channel is given by $\inf_{\theta} \max_{Q_X} I(X;Y|\theta)$.) <|cite_end|>, as well as the rate--distortion function with feedforward <|cite_start|> (Reference: Source coding with feed-forward: Rate-distortion theorems and error exponents for a general source: In this work, we consider a source coding model with feed-forward. We analyze a system with a noiseless, feed-forward link where the decoder has knowledge of all previous source samples while reconstructing the present sample. The rate-distortion function for an arbitrary source with feed-forward is derived in terms of directed information, a variant of mutual information. We further investigate the nature of the rate-distortion function with feed-forward for two common types of sources- discrete memory- less sources and Gaussian sources. We then characterize the error exponent for a general source with feed-forward. The results are then extended to feed-forward with an arbitrary delay larger than the block length.) <|cite_end|> <|cite_start|> (Reference: On the role of feedforward in Gaussian sources: Point-to-point source coding and multiple description source coding: Source coding with noiseless feedforward deals with efficient quantization of information sources into indexes, where to reconstruct a source sample, the decoder in addition to this index, has access to all the previous noiseless source samples. This problem may find applications in sensor networks, economics, and control theory. In the first part of this paper, we consider a deterministic block coding scheme for independent and identically distributed (i.i.d.) Gaussian sources. We show that this scheme is asymptotically optimal in terms of its rate-distortion function and the error exponent. In the second part of this paper we consider two-channel multiple description source coding with noiseless feedforward. We consider i.i.d. Gaussian sources and obtain the optimal rate-distortion region. The key result is that there is no penalty to be paid for constraining the descriptions to be mutually refineable. That is when one of the channels is active, the decoder which operates on one of the descriptions achieves the optimal rate-distortion function, and when both channels are active, the joint decoder still attains the optimal rate-distortion function. This implies that for memoryless sources with additive distortion measures, in the case of multiple description source coding, noiseless feedforward provides significant improvements in performance. We then evaluate the optimal multiple description source coding error exponents for the symmetric case) <|cite_end|>. In another context, directed information captures the difference in growth rates of wealth in horse race gambling due to \emph{causal} side information <|cite_start|> (Reference: On Directed Information and Gambling: We study the problem of gambling in horse races with causal side information and show that Massey's directed information characterizes the increment in the maximum achievable capital growth rate due to the availability of side information. This result gives a natural interpretation of directed information $I(Y^n \to X^n)$ as the amount of information that $Y^n$ \emph{causally} provides about $X^n$. 
Extensions to stock market portfolio strategies and data compression with causal side information are also discussed.) <|cite_end|>. This provides a natural interpretation of $I(X^n \to Y^n)$ as the amount of information about $Y^n$ causally provided by $X^n$ on the fly. Similar interpretations for directed information can be drawn for other problems in science and engineering <|cite_start|> (Reference: Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing: We investigate the role of Massey's directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to {causal} side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of {causal} side information in instantaneous compression and quantifies the benefit of {causal} inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process $Y$ {causally} influences another process $X$ or not. These results give a natural interpretation of directed information $I(Y^n \to X^n)$ as the amount of information that a random sequence $Y^n = (Y_1,Y_2,..., Y_n)$ {causally} provides about another random sequence $X^n = (X_1,X_2,...,X_n)$. A new measure, {\em directed lautum information}, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing.) <|cite_end|>. This paper is dedicated to extending the mathematical notion of directed information to continuous-time random processes and to establishing results that demonstrate the operational significance of this notion in estimation and communication. Our contributions include the following: \begin{itemize} \item We introduce the notion of directed information in continuous time. Given a pair of continuous-time processes in a time interval and its partition consisting of $n$ subintervals, we first consider the (discrete-time) directed information for the two sequences of length $n$ whose components are the sample paths on the respective subintervals. The resulting quantity depends on the specific partition of the time interval. We define directed information in continuous time by taking the infimum over all finite time partitions. Thus, in contrast to mutual information in continuous time which can be defined as a \emph{supremum} of mutual information over finite ``space'' partitions~\cite[Ch.~2.5]{Gallager68}, \cite[Ch.~3.5]{Pinsker60}, inherent to our notion of directed information is a similar supremum followed by an \emph{infimum} over time partitions. We explain why this definition is natural by showing that the continuous-time directed information inherits key properties of its discrete-time origin and by establishing new properties that are meaningful in continuous time. \item We show that this notion of directed information arises in extending classical relationships between information and estimation in continuous time---Duncan's theorem <|cite_start|> (Reference: On the calculation of Mutual Information: Abstract : Calculating the amount of information about a random function contained in another random function has important uses in communication theory. 
An expression for the mutual information for continuous time random processes has been given by Gelfand and Yaglom, Chiang, and Perez by generalizing Shannon's result in a natural way. Under a condition of absolute continuity of measures the continuous time expression has the same form as Shannon's result. For two Gaussian processes Gelfand and Yaglom express the mutual information in terms of a mean square estimation error. We generalize this result to diffusion processes and express the solution in a different form which is more naturally related to a corresponding filtering problem. We also use these results to calculate some information rates.) <|cite_end|> that relates the minimum mean squared error (MMSE) in causal estimation of a target signal based on an observation through an additive white Gaussian noise channel to the \emph{mutual information} between the target signal and the observation, and its counterpart for the Poisson channel---to the scenarios in which the channel input process can causally depend on the channel output process, whereby corresponding relationships now hold between \emph{directed information} and estimation. \item We illustrate these relationships between directed information and estimation by characterizing the directed information rate and the feedback capacity of a continuous-time Poisson channel with inputs constrained to be constant between events at the channel output. \item We establish the fundamental role of continuous-time directed information in characterizing the feedback capacity of a large class of continuous-time channels. In particular, we show that for channels where the output is a function of the input and some stationary ergodic ``noise'' process, the continuous-time directed information characterizes the feedback capacity of the channel. \end{itemize} The remainder of the paper is organized as follows. Section \ref{sec: Definition of Directed Information in Continuous Time} is devoted to the definition of directed information and related quantities in continuous time, which is followed by a presentation of key properties of continuous-time directed information in Section \ref{sec: Properties of the Directed Information in Continuous Time}. In Section \ref{sec: Directed Information and Causal Estimation}, we establish the generalizations of Duncan's theorem and its Poisson counterpart that accommodate the presence of feedback. In Section \ref{sec: poisson feedback example}, we apply the relationship between the causal estimation error and directed information for the Poisson channel to compute the directed information rate between the input and the output of this channel in a scenario that involves feedback. In Section \ref{sec: Communication Over Continuous-Time Channels with Feedback}, we study a general feedback communication problem in which our notion of directed information in continuous time emerges naturally in the characterization of the feedback capacity. Section \ref{sec: Concluding Remarks} concludes the paper with a few remarks. <|paper_end|>
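The construction above passes from the discrete-time quantity $I(X^n \to Y^n)$ to continuous time by partitioning $[0, T]$. As a reading aid, the following LaTeX sketch records one natural way to write that passage, together with Duncan's relationship and the feedback generalization stated in the contributions; the notation $X_a^b$ for the sample path on $[a, b]$ and the unit-SNR normalization of the Gaussian channel $dY_t = X_t\,dt + dW_t$ are assumptions of this sketch, and the paper's own definitions carry regularity conditions not reproduced here.

```latex
% Continuous-time directed information as described above: each conditional
% mutual-information term hides a supremum over "space" partitions, and the
% time structure contributes an infimum over partitions of [0, T].
\[
  I\bigl(X_0^T \to Y_0^T\bigr)
  \;=\; \inf_{0 = t_0 < t_1 < \cdots < t_n = T}\;
        \sum_{i=1}^{n} I\bigl(X_0^{t_i};\, Y_{t_{i-1}}^{t_i} \bigm| Y_0^{t_{i-1}}\bigr).
\]
% Duncan's theorem without feedback, and the generalization sketched in the
% contributions: with feedback, directed information replaces mutual
% information while the causal MMSE integral stays the same.
\[
  I\bigl(X_0^T; Y_0^T\bigr)
  \;=\; \frac{1}{2}\int_0^T
        \mathbb{E}\Bigl[\bigl(X_t - \mathbb{E}[X_t \mid Y_0^t]\bigr)^2\Bigr]\,dt,
  \qquad
  I\bigl(X_0^T \to Y_0^T\bigr)
  \;=\; \frac{1}{2}\int_0^T
        \mathbb{E}\Bigl[\bigl(X_t - \mathbb{E}[X_t \mid Y_0^t]\bigr)^2\Bigr]\,dt .
\]
```

Since the discrete-time sum $I(X^n \to Y^n) = \sum_{i=1}^n I(X^i; Y_i \mid Y^{i-1})$ is the building block of everything above, a minimal brute-force sketch of how it could be evaluated for finite alphabets follows; the `joint` dictionary and all names are illustrative choices, not an implementation from the paper.

```python
import math
from collections import defaultdict

def directed_information(joint, n):
    """Brute-force I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}), in nats.

    joint -- dict mapping (x, y) to P(X^n = x, Y^n = y), where x and y are
             length-n tuples over finite alphabets.
    """
    total = 0.0
    for i in range(1, n + 1):
        # Marginals needed for the i-th conditional mutual information term.
        p_xy = defaultdict(float)       # P(X^i, Y^i)
        p_y = defaultdict(float)        # P(Y^i)
        p_x_yprev = defaultdict(float)  # P(X^i, Y^{i-1})
        p_yprev = defaultdict(float)    # P(Y^{i-1})
        for (x, y), p in joint.items():
            p_xy[(x[:i], y[:i])] += p
            p_y[y[:i]] += p
            p_x_yprev[(x[:i], y[:i - 1])] += p
            p_yprev[y[:i - 1]] += p
        # I(X^i; Y_i | Y^{i-1}) = E[ log P(Y_i | X^i, Y^{i-1}) / P(Y_i | Y^{i-1}) ].
        for (xi, yi), p in p_xy.items():
            if p > 0.0:
                num = p / p_x_yprev[(xi, yi[:-1])]  # P(y_i | x^i, y^{i-1})
                den = p_y[yi] / p_yprev[yi[:-1]]    # P(y_i | y^{i-1})
                total += p * math.log(num / den)
    return total

# Sanity check: the noiseless channel Y_i = X_i with i.i.d. uniform bits
# gives I(X^n -> Y^n) = n * log(2) nats.
joint = {((a, b), (a, b)): 0.25 for a in (0, 1) for b in (0, 1)}
print(directed_information(joint, 2))  # ~1.386 = 2 * log(2)
```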
[ "<|reference_start|> Feedback Capacity of Stationary Gaussian Channels: The feedback capacity of additive stationary Gaussian noise channels is characterized as the solution to a variational problem. Toward this end, it is proved that the optimal feedback coding scheme is stationary. When specialized to the first-order autoregressive moving average noise spectrum, this variational characterization yields a closed-form expression for the feedback capacity. In particular, this result shows that the celebrated Schalkwijk-Kailath coding scheme achieves the feedback capacity for the first-order autoregressive moving average Gaussian channel, positively answering a long-standing open problem studied by Butman, Schalkwijk-Tiernan, Wolfowitz, Ozarow, Ordentlich, Yang-Kavcic-Tatikonda, and others. More generally, it is shown that a k-dimensional generalization of the Schalkwijk-Kailath coding scheme achieves the feedback capacity for any autoregressive moving average noise spectrum of order k. Simply put, the optimal transmitter iteratively refines the receiver's knowledge of the intended message. <|reference_end|>", "<|reference_start|> Capacity Region of the Finite-State Multiple Access Channel with and without Feedback: The capacity region of the Finite-State Multiple Access Channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We characterize both an inner and an outer bound for this region, using Masseys's directed information. These bounds are shown to coincide, and hence yield the capacity region, of FS-MACs where the state process is stationary and ergodic and not affected by the inputs. Though `multi-letter' in general, our results yield explicit conclusions when applied to specific scenarios of interest. E.g., our results allow us to: - Identify a large class of FS-MACs, that includes the additive mod-2 noise MAC where the noise may have memory, for which feedback does not enlarge the capacity region. - Deduce that, for a general FS-MAC with states that are not affected by the input, if the capacity (region) without feedback is zero, then so is the capacity (region) with feedback. - Deduce that the capacity region of a MAC that can be decomposed into a `multiplexer' concatenated by a point-to-point channel (with, without, or with partial feedback), the capacity region is given by $\\sum_{m} R_m \\leq C$, where C is the capacity of the point to point channel and m indexes the encoders. Moreover, we show that for this family of channels source-channel coding separation holds. <|reference_end|>", "<|reference_start|> Source coding with feed-forward: Rate-distortion theorems and error exponents for a general source: In this work, we consider a source coding model with feed-forward. We analyze a system with a noiseless, feed-forward link where the decoder has knowledge of all previous source samples while reconstructing the present sample. The rate-distortion function for an arbitrary source with feed-forward is derived in terms of directed information, a variant of mutual information. We further investigate the nature of the rate-distortion function with feed-forward for two common types of sources- discrete memory- less sources and Gaussian sources. We then characterize the error exponent for a general source with feed-forward. The results are then extended to feed-forward with an arbitrary delay larger than the block length. 
<|reference_end|>", "<|reference_start|> On the calculation of Mutual Information: Abstract : Calculating the amount of information about a random function contained in another random function has important uses in communication theory. An expression for the mutual information for continuous time random processes has been given by Gelfand and Yaglom, Chiang, and Perez by generalizing Shannon's result in a natural way. Under a condition of absolute continuity of measures the continuous time expression has the same form as Shannon's result. For two Gaussian processes Gelfand and Yaglom express the mutual information in terms of a mean square estimation error. We generalize this result to diffusion processes and express the solution in a different form which is more naturally related to a corresponding filtering problem. We also use these results to calculate some information rates. <|reference_end|>" ]
[ 7, 11, 14, 18 ]
{"<|cite_1|>": "ss-1264121", "<|cite_2|>": "ss-1264121", "<|cite_3|>": "ss-1955825", "<|cite_4|>": "arxiv-674844", "<|cite_5|>": "arxiv-675397", "<|cite_6|>": "arxiv-674650", "<|multi_cite_7_1|>": "ss-781115", "<|multi_cite_7_2|>": "arxiv-673905", "<|multi_cite_7_3|>": "arxiv-674919", "<|cite_8|>": "ss-1955825", "<|multi_cite_9_1|>": "ss-1955825", "<|multi_cite_9_2|>": "arxiv-901", "<|cite_10|>": "ss-2535326", "<|cite_11|>": "arxiv-1739", "<|multi_cite_12_1|>": "ss-1014963", "<|multi_cite_12_2|>": "ss-1016475", "<|cite_13|>": "arxiv-2608", "<|cite_14|>": "arxiv-10690", "<|cite_15|>": "ss-761074"}
2307.06101
<|paper_start|> Title: Air Bumper: A Collision Detection and Reaction Framework for Autonomous MAV Navigation Abstract: Air Bumper: A Collision Detection and Reaction Framework for Autonomous MAV Navigation: Autonomous navigation in unknown environments with obstacles remains challenging for micro aerial vehicles (MAVs) due to their limited onboard computing and sensing resources. Although various collision avoidance methods have been developed, it is still possible for drones to collide with unobserved obstacles due to unpredictable disturbances, sensor limitations, and control uncertainty. Instead of completely avoiding collisions, this article proposes Air Bumper, a collision detection and reaction framework, for fully autonomous flight in 3D environments to improve the safety of drones. Our framework only utilizes the onboard inertial measurement unit (IMU) to detect and estimate collisions. We further design a collision recovery control for rapid recovery and collision-aware mapping to integrate collision information into general LiDAR-based sensing and planning frameworks. Our simulation and experimental results show that the quadrotor can rapidly detect, estimate, and recover from collisions with obstacles in 3D space and continue the flight smoothly with the help of the collision-aware map. Our Air Bumper will be released as open-source software on GitHub. Introduction MAVs have gained increasing popularity for their ability to access and operate in environments that are difficult or impossible for humans to reach, making them valuable tools in various fields like infrastructure inspection <|cite_start|> (Reference: Review of Unmanned Aerial System (UAS) applications in the built environment: Towards automated building inspection procedures using drones: ) <|cite_end|> <|cite_start|> (Reference: Towards UAV-based bridge inspection systems: A review and an application perspective: Visual condition inspections remain paramount to assessing the current deterioration status of a bridge and assigning remediation or maintenance tasks so as to ensure the ongoing serviceability of the structure. However, in recent years, there has been an increasing backlog of maintenance activities. Existing research reveals that this is attributable to the labour-intensive, subjective and disruptive nature of the current bridge inspection method. Current processes ultimately require lane closures, traffic guidance schemes and inspection equipment. This not only increases the whole-of-life costs of the bridge, but also increases the risk to the travelling public as issues affecting the structural integrity may go unaddressed. As a tool for bridge condition inspections, Unmanned Aerial Vehicles (UAVs) or, drones, offer considerable potential, allowing a bridge to be visually assessed without the need for inspectors to walk across the deck or utilise under-bridge inspection units. With current inspection processes placing additional strain on the existing bridge maintenance resources, the technology has the potential to significantly reduce the overall inspection costs and disruption caused to the travelling public. In addition to this, the use of automated aerial image capture enables engineers to better understand a situation through the 3D spatial context offered by UAV systems. However, the use of UAV for bridge inspection involves a number of critical issues to be resolved, including stability and accuracy of control, and safety to people. 
SLAM (Simultaneous Localisation and Mapping) is a technique that could be used by a UAV to build a map of the bridge underneath, while simultaneously determining its location on the constructed map. While there are considerable economic and risk-related benefits created through introducing entirely new ways of inspecting bridges and visualising information, there also remain hindrances to the wider deployment of UAVs. This study is to provide a context for use of UAVs for conducting visual bridge inspections, in addition to addressing the obstacles that are required to be overcome in order for the technology to be integrated into current practice.) <|cite_end|> <|cite_start|> (Reference: Past, present and future of robotic tunnel inspection: ) <|cite_end|>, subterranean exploration <|cite_start|> (Reference: {CERBERUS in the DARPA Subterranean Challenge: This article presents the core technologies and deployment strategies of Team CERBERUS that enabled our winning run in the DARPA Subterranean Challenge finals. CERBERUS is a robotic system-of-systems involving walking and flying robots presenting resilient autonomy, as well as mapping and navigation capabilities to explore complex underground environments. Description This article details the winning performance of Team CERBERUS in the DARPA Subterranean Challenge Final Event.) <|cite_end|> <|cite_start|> (Reference: NeBula: Quest for Robotic Autonomy in Challenging Environments; TEAM CoSTAR at the DARPA Subterranean Challenge: This paper presents and discusses algorithms, hardware, and software architecture developed by the TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved 2nd and 1st place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including: (i) geometric and semantic environment mapping; (ii) a multi-modal positioning system; (iii) traversability analysis and local planning; (iv) global motion planning and exploration behavior; (i) risk-aware mission planning; (vi) networking and decentralized reasoning; and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g. wheeled, legged, flying), in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.) <|cite_end|> <|cite_start|> (Reference: Heterogeneous Ground and Air Platforms, Homogeneous Sensing: Team CSIRO Data61's Approach to the DARPA Subterranean Challenge: Heterogeneous teams of robots, leveraging a balance between autonomy and human interaction, bring powerful capabilities to the problem of exploring dangerous, unstructured subterranean environments. Here we describe the solution developed by Team CSIRO Data61, consisting of CSIRO, Emesent and Georgia Tech, during the DARPA Subterranean Challenge. 
These presented systems were fielded in the Tunnel Circuit in August 2019, the Urban Circuit in February 2020, and in our own Cave event, conducted in September 2020. A unique capability of the fielded team is the homogeneous sensing of the platforms utilised, which is leveraged to obtain a decentralised multi-agent SLAM solution on each platform (both ground agents and UAVs) using peer-to-peer communications. This enabled a shift in focus from constructing a pervasive communications network to relying on multi-agent autonomy, motivated by experiences in early circuit events. These experiences also showed the surprising capability of rugged tracked platforms for challenging terrain, which in turn led to the heterogeneous team structure based on a BIA5 OzBot Titan ground robot and an Emesent Hovermap UAV, supplemented by smaller tracked or legged ground robots. The ground agents use a common CatPack perception module, which allowed reuse of the perception and autonomy stack across all ground agents with minimal adaptation.) <|cite_end|>, and search and rescue <|cite_start|> (Reference: A lightweight autonomous MAV for indoor search and rescue: Micro Aerial Vehicles (MAVs) have great potentials to be applied for indoor search and rescue missions. In this paper, we propose a modular lightweight design of an autonomous MAV with integrated hardware and software. The MAV is equipped with the 2D laser scanner, camera, mission computer and flight controller, running all the computation onboard in real time. The onboard perception system includes a laser‐based SLAM module and a custom‐designed visual detection module. A dual Kalman filter design provides robust state estimation by multiple sensor fusion. Specifically, the fusion module provides robust altitude measurement in the circumstance of surface changing. In addition, indoor‐outdoor transition is explicitly handled by the fusion module. In order to efficiently navigate through obstacles and adapt to multiple tasks, a task tree‐based mission planning method is seamlessly integrated with path planning and control modules. The MAV is capable of searching and rescuing victims from unknown indoor environments effectively. It was validated by our award‐winning performance at the 2017 International Micro Air Vehicle Competition (IMAV 2017), held in Toulouse, France. The performance video is available on https://youtu.be/8H19ppS_VXM.) <|cite_end|> <|cite_start|> (Reference: Decentralized swarms of unmanned aerial vehicles for search and rescue operations without explicit communication: ) <|cite_end|>, etc. However, safety becomes a critical concern for MAVs when operating in such complex and cluttered environments. These scenarios present a significant challenge for MAVs to conduct safe and collision-free flights. To address this challenge, much research has focused on utilizing onboard sensors such as LiDAR <|cite_start|> (Reference: FAST-LIO2: Fast Direct LiDAR-inertial Odometry: This paper presents FAST-LIO2: a fast, robust, and versatile LiDAR-inertial odometry framework. Building on a highly efficient tightly-coupled iterated Kalman filter, FAST-LIO2 has two key novelties that allow fast, robust, and accurate LiDAR navigation (and mapping). The first one is directly registering raw points to the map (and subsequently update the map, i.e., mapping) without extracting features. This enables the exploitation of subtle features in the environment and hence increases the accuracy. 
The elimination of a hand-engineered feature extraction module also makes it naturally adaptable to emerging LiDARs of different scanning patterns; The second main novelty is maintaining a map by an incremental k-d tree data structure, ikd-Tree, that enables incremental updates (i.e., point insertion, delete) and dynamic re-balancing. Compared with existing dynamic data structures (octree, R*-tree, nanoflann k-d tree), ikd-Tree achieves superior overall performance while naturally supports downsampling on the tree. We conduct an exhaustive benchmark comparison in 19 sequences from a variety of open LiDAR datasets. FAST-LIO2 achieves consistently higher accuracy at a much lower computation load than other state-of-the-art LiDAR-inertial navigation systems. Various real-world experiments on solid-state LiDARs with small FoV are also conducted. Overall, FAST-LIO2 is computationally-efficient (e.g., up to 100 Hz odometry and mapping in large outdoor environments), robust (e.g., reliable pose estimation in cluttered indoor environments with rotation up to 1000 deg/s), versatile (i.e., applicable to both multi-line spinning and solid-state LiDARs, UAV and handheld platforms, and Intel and ARM-based processors), while still achieving higher accuracy than existing methods. Our implementation of the system FAST-LIO2, and the data structure ikd-Tree are both open-sourced on Github.) <|cite_end|>, stereo cameras, and RGB-D cameras <|cite_start|> (Reference: ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual--Inertial, and Multimap SLAM: This article presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multimap SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a tightly integrated visual-inertial SLAM system that fully relies on maximum a posteriori (MAP) estimation, even during IMU initialization, resulting in real-time robust operation in small and large, indoor and outdoor environments, being two to ten times more accurate than previous approaches. The second main novelty is a multiple map system relying on a new place recognition method with improved recall that lets ORB-SLAM3 survive to long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting them. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse in all the algorithm stages all previous information from high parallax co-visible keyframes, even if they are widely separated in time or come from previous mapping sessions, boosting accuracy. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm in the EuRoC drone and 9 mm under quick hand-held motions in the room of TUM-VI dataset, representative of AR/VR scenarios. For the benefit of the community we make public the source code.) <|cite_end|> for Simultaneous Localization and Mapping (SLAM); motion planning algorithms <|cite_start|> (Reference: RAPTOR: Robust and Perception-aware Trajectory Replanning for Quadrotor Fast Flight: Recent advances in trajectory replanning have enabled quadrotor to navigate autonomously in unknown environments. However, high-speed navigation still remains a significant challenge. 
Given very limited time, existing methods have no strong guarantee on the feasibility or quality of the solutions. Moreover, most methods do not consider environment perception, which is the key bottleneck to fast flight. In this paper, we present RAPTOR, a robust and perception-aware replanning framework to support fast and safe flight. A path-guided optimization (PGO) approach that incorporates multiple topological paths is devised, to ensure finding feasible and high-quality trajectories in very limited time. We also introduce a perception-aware planning strategy to actively observe and avoid unknown obstacles. A risk-aware trajectory refinement ensures that unknown obstacles which may endanger the quadrotor can be observed earlier and avoid in time. The motion of yaw angle is planned to actively explore the surrounding space that is relevant for safe navigation. The proposed methods are tested extensively. We will release our implementation as an open-source package for the community.) <|cite_end|> <|cite_start|> (Reference: Model predictive local motion planning with boundary state constrained primitives: Motion primitives are frequently used to find valid local trajectories for mobile robots, especially in cases where fast replanning is required, but the onboard computational power is limited. In this letter, we present a practical framework for constructing motion primitives from boundary state constraints, and then using them for online planning. The primitives are offline constructed with either a boundary value problem solver or a controller. They are then approximated with a neural network for fast evaluation during online optimization. The references and nominal inputs are generated in a receding horizon fashion by solving a model predictive control problem in the continuous domain with either gradient-based or gradient-free techniques. The proposed approach is computationally efficient and has been tested on quadrotors in real flight experiments, including sensor-based navigation, flying through a complex three-dimensional environment, dynamic obstacle avoidance, and tracking moving references.) <|cite_end|> have been developed to generate collision-free paths. Despite these efforts, MAVs are still susceptible to colliding with obstacles due to unpredictable disturbances, sensor limitations, and control uncertainty. \begin{figure}[!htb] \vspace{6pt} \centering \includegraphics[width=1.0 \linewidth]{pic/ghost.png} \caption{A collision detection and reaction experiment with an unobserved obstacle. (a) Composite images of the experiment. (b) Collision-aware volumetric map with collision point cloud.} \label{fig:cage_uav} \vspace{-12pt} \end{figure} Instead of dealing with MAV collisions by avoiding them entirely, increasing attention has shifted to collision detection and reaction. In this paper, we introduce a unified IMU-based collision detection and reaction framework (Air Bumper) that estimates collisions and integrates the collision information into a general autonomous MAV navigation framework. To handle collisions effectively, a collision-aware volumetric mapping algorithm is developed, which collaborates with general motion planning algorithms to enable the MAVs to reach their original targets without getting stuck at obstacles. Notably, the collision detection and estimation only rely on IMU data from the flight controller without requiring any external sensors. Moreover, a fully autonomous collision-resilient MAV with a 3D cage is designed, crafted, and evaluated.
This MAV itself is effectively tolerant of collisions, and its collision resilience and autonomy can be further enhanced by incorporating the proposed framework, along with general autopilot, SLAM, and motion planning algorithms. The framework enables the drone to detect and react to unobserved collisions, as well as update a collision-aware map for autonomous navigation after collisions (Fig. \ref{fig:cage_uav}). The experiments conducted in simulated and real unknown environments demonstrate that our proposed framework effectively facilitates MAV recovery from collisions with transparent and unpredictable obstacles in 3D spaces, allowing the MAV to continue its assigned flight tasks. \begin{figure}[t] \vspace{6pt} \centering \includegraphics[width=1.0 \linewidth]{pic/airbumper.pdf} \caption{Overview of the collision detection and reaction framework.} \vspace{-6pt} \label{fig: workflow} \vspace{-12pt} \end{figure} Related Work \label{Related Works} In the face of possible in-flight collisions, many researchers choose not to avoid collisions by generating collision-free paths but instead to design collision-resilient MAVs that can withstand them. At the hardware level, there are many kinds of designs and structures to enhance collision resilience. As a high-speed rotating part, the propeller is the most vulnerable to damage in a collision. Therefore, propeller guards <|cite_start|> (Reference: Development and experimental validation of aerial vehicle with passive rotating shell on each rotor: Aerial robotics is a fast-growing field of robotics and has been successfully used in various applications. Still, it faces many challenges, such as dealing with unavoidable obstacles in a cluttered environment. Recently, a flying robot with a protective shell that can rotate passively was introduced. The passive rotating mechanism is intended to reduce the impact force on the attitude of the UAV. However, such a system also has some limitations. Because the shell rotates passively, the ability to physically interact outside the shell is limited, and the onboard camera and other remote sensors are constantly obstructed. In this letter, a new idea is introduced in response to the limitations of the previous system while retaining the protective shell and maintaining some degrees of passive rotation of the shell. It is proposed to position two passive rotating hemispherical shells in each rotor to directly protect the propeller. This letter presents the concept, discusses the design and proof of concept, and validates the concept through experiments. Various experiments are conducted to demonstrate the capabilities of the proposed flying robot, resolve the problem of physical interaction and camera obstruction, and introduce new advantages.) <|cite_end|> <|cite_start|> (Reference: Toward Impact-resilient Quadrotor Design, Collision Characterization and Recovery Control to Sustain Flight after Collisions: Collision detection and recovery for aerial robots remain a challenge because of the limited space for sensors and local stability of the flight controller. We introduce a novel collision-resilient quadrotor that features a compliant arm design to enable free flight while allowing for one passive degree of freedom to absorb shocks. We further propose a novel collision detection and characterization method based on Hall sensors, as well as a new recovery control method to generate and track a smooth trajectory after a collision occurs.
Experimental results demonstrate that the robot can detect and recover from high-speed collisions with various obstacles such as walls and poles. Moreover, it can survive collisions that are hard to detect with existing methods based on IMU data and contact models, for example, when colliding with unstructured surfaces, or being hit by a moving obstacle while hovering.) <|cite_end|> <|cite_start|> (Reference: Fly-crash-recover: A sensor-based reactive framework for online collision recovery of uavs: Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular thanks to the multiplicity of operations in which they can be deployed such as surveillance, search and rescue, mapping, transportation, hobby and recreational activities. Although sensors like LIDARs and cameras are often present on such systems for motion planning to avoid obstacles, collisions can still occur in very dense and unstructured environments, especially if disturbances are present. In this work, we research techniques to recover UAVs after a collision has occurred. We note that the on-board sensors, especially the inertial sensor used to stabilize the UAV, run at a high frequencies obtaining hundreds of data points every second. At run-time, this can be leveraged at the moment of a collision to quickly detect and recover the system. Our approach considers knowledge of UAV system dynamics to predict the expected behavior of the vehicle under safe flight conditions and leverage such expectations together with inertial data to detect collisions rapidly (on the order of milliseconds). We also propose a potential field-based approach to map the collision and create the correct reactive maneuver to avoid the collided object and bring the system back to a stable and safe configuration. Experiments are executed using ROS on two micro-quadrotor UAV platforms having different dynamics and performances, while colliding with poles and walls positioned in different configurations. In our results, we are able to show that the UAVs are successfully able to detect and avoid a collision, while also providing a rigorous analysis of the conditions in which the system can recover from imminent collisions.) <|cite_end|> are commonly used to protect it. At the same time, many cage-like structures are designed to provide more protection for the whole drone. Rigid cage structure <|cite_start|> (Reference: RMF-Owl: A Collision-Tolerant Flying Robot for Autonomous Subterranean Exploration: This work presents the design, hardware realization, autonomous exploration and object detection capabilities of RMF-Owl, a new collision-tolerant aerial robot tailored for resilient autonomous subterranean exploration. The system is custom built for underground exploration with focus on collision tolerance, resilient autonomy with robust localization and mapping, alongside high-performance exploration path planning in confined, obstacle-filled and topologically complex underground environments. Moreover, RMF-Owl offers the ability to search, detect and locate objects of interest which can be particularly useful in search and rescue missions. A series of results from field experiments are presented in order to demonstrate the system's ability to autonomously explore challenging unknown underground environments.) 
<|cite_end|> <|cite_start|> (Reference: A UAV-based explore-then-exploit system for autonomous indoor facility inspection and scene reconstruction: ) <|cite_end|> can use its strength to protect fragile internal parts, such as sensors, flight controllers, and onboard computers. In addition to minimizing the impact of collisions through the hardware design discussed above, some researchers are also extracting environmental information from collisions in order to integrate it into the MAV perception system. Lew et al. in <|cite_start|> (Reference: Contact Inertial Odometry: Collisions are your Friends: Autonomous exploration of unknown environments with aerial vehicles remains a challenge, especially in perceptually degraded conditions. Dust, fog, or a lack of visual or LiDAR-based features results in severe difficulties for state estimation algorithms, which failure can be catastrophic. In this work, we show that it is indeed possible to navigate in such conditions without any exteroceptive sensing by exploiting collisions instead of treating them as constraints. To this end, we present a novel contact-based inertial odometry (CIO) algorithm: it uses estimated external forces with the environment to detect collisions and generate pseudo-measurements of the robot velocity, enabling autonomous flight. To fully exploit this method, we first perform modeling of a hybrid ground and aerial vehicle which can withstand collisions at moderate speeds, for which we develop an external wrench estimation algorithm. Then, we present our CIO algorithm and develop a reactive planner and control law which encourage exploration by bouncing off obstacles. All components of this framework are validated in hardware experiments and we demonstrate that a quadrotor can traverse a cluttered environment using an IMU only. This work can be used on drones to recover from visual inertial odometry failure or on micro-drones that do not have the payload capacity to carry cameras, LiDARs or powerful computers.) <|cite_end|> proposed a contact-based inertial odometry (CIO), which can provide a usable but inaccurate velocity estimation for a hybrid ground and aerial vehicle performing autonomous navigation. During flight, several non-destructive collisions occur, and the controller obtains updated information from them. The work in <|cite_start|> (Reference: The Tiercel: A novel autonomous micro aerial vehicle that can map the environment by flying into obstacles: Autonomous flight through unknown environments in the presence of obstacles is a challenging problem for micro aerial vehicles (MAVs). A majority of the current state-of-art research assumes obstacles as opaque objects that can be easily sensed by optical sensors such as cameras or LiDARs. However in indoor environments with glass walls and windows, or scenarios with smoke and dust, robots (even birds) have a difficult time navigating through the unknown space.In this paper, we present the design of a new class of micro aerial vehicles that achieves autonomous navigation and are robust to collisions. In particular, we present the Tiercel MAV: a small, agile, light weight and collision-resilient robot powered by a cellphone grade CPU. Our design exploits contact to infer the presence of transparent or reflective obstacles like glass walls, integrating touch with visual perception for SLAM. The Tiercel is able to localize using visual-inertial odometry (VIO) running on board the robot with a single downward facing fisheye camera and an IMU.
We show how our collision detector design and experimental set up enable us to characterize the impact of collisions on VIO. We further develop a planning strategy to enable the Tiercel to fly autonomously in an unknown space, sustaining collisions and creating a 2D map of the environment. Finally we demonstrate a swarm of three autonomous Tiercel robots safely navigating and colliding through an obstacle field to reach their objectives.) <|cite_end|> analyzes the impact of collisions on visual-inertial odometry (VIO) and uses collision information to build a map with a downward camera for localization. In their experiments, two glass walls are included to show that transparent objects can cause LiDAR to report inaccurate distances. Still, collision mapping can help MAVs detect these transparent walls. Authors in <|cite_start|> (Reference: Toward Impact-resilient Quadrotor Design, Collision Characterization and Recovery Control to Sustain Flight after Collisions: Collision detection and recovery for aerial robots remain a challenge because of the limited space for sensors and local stability of the flight controller. We introduce a novel collision-resilient quadrotor that features a compliant arm design to enable free flight while allowing for one passive degree of freedom to absorb shocks. We further propose a novel collision detection and characterization method based on Hall sensors, as well as a new recovery control method to generate and track a smooth trajectory after a collision occurs. Experimental results demonstrate that the robot can detect and recover from high-speed collisions with various obstacles such as walls and poles. Moreover, it can survive collisions that are hard to detect with existing methods based on IMU data and contact models, for example, when colliding with unstructured surfaces, or being hit by a moving obstacle while hovering.) <|cite_end|> <|cite_start|> (Reference: Online Search-based Collision-inclusive Motion Planning and Control for Impact-resilient Mobile Robots: This paper focuses on the emerging paradigm shift of collision-inclusive motion planning and control for impact-resilient mobile robots, and develops a unified hierarchical framework for navigation in unknown and partially-observable cluttered spaces. At the lower-level, we develop a deformation recovery control and trajectory replanning strategy that handles collisions that may occur at run-time, locally. The low-level system actively detects collisions (via embedded Hall effect sensors on a mobile robot built in-house), enables the robot to recover from them, and locally adjusts the post-impact trajectory. Then, at the higher-level, we propose a search-based planning algorithm to determine how to best utilize potential collisions to improve certain metrics, such as control energy and computational time. Our method builds upon A* with jump points. We generate a novel heuristic function, and a collision checking and adjustment technique, thus making the A* algorithm converge faster to reach the goal by exploiting and utilizing possible collisions. The overall hierarchical framework generated by combining the global A* algorithm and the local deformation recovery and replanning strategy, as well as individual components of this framework, are tested extensively both in simulation and experimentally.
An ablation study draws links to related state-of-the-art search-based collision-avoidance planners (for the overall framework), as well as search-based collision-avoidance and sampling-based collision-inclusive global planners (for the higher level). Results demonstrate our method's efficacy for collision-inclusive motion planning and control in unknown environments with isolated obstacles for a class of impact-resilient robots operating in 2D.) <|cite_end|> introduce Hall sensors to detect collisions and estimate the intensity and location of each collision for reaction control. However, these works tend to navigate using only the IMU or use collision data directly for reaction control, which makes the collision information hard to record and reuse. Although the method proposed in <|cite_start|> (Reference: Fly-crash-recover: A sensor-based reactive framework for online collision recovery of uavs: Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular thanks to the multiplicity of operations in which they can be deployed such as surveillance, search and rescue, mapping, transportation, hobby and recreational activities. Although sensors like LIDARs and cameras are often present on such systems for motion planning to avoid obstacles, collisions can still occur in very dense and unstructured environments, especially if disturbances are present. In this work, we research techniques to recover UAVs after a collision has occurred. We note that the on-board sensors, especially the inertial sensor used to stabilize the UAV, run at a high frequencies obtaining hundreds of data points every second. At run-time, this can be leveraged at the moment of a collision to quickly detect and recover the system. Our approach considers knowledge of UAV system dynamics to predict the expected behavior of the vehicle under safe flight conditions and leverage such expectations together with inertial data to detect collisions rapidly (on the order of milliseconds). We also propose a potential field-based approach to map the collision and create the correct reactive maneuver to avoid the collided object and bring the system back to a stable and safe configuration. Experiments are executed using ROS on two micro-quadrotor UAV platforms having different dynamics and performances, while colliding with poles and walls positioned in different configurations. In our results, we are able to show that the UAVs are successfully able to detect and avoid a collision, while also providing a rigorous analysis of the conditions in which the system can recover from imminent collisions.) <|cite_end|> successfully achieves collision recording for further flight in a laboratory environment using motion capture systems, the lack of integration with online sensing and planning modules limits its applicability in real-world settings. Additionally, most of these works <|cite_start|> (Reference: Quadrotor collision characterization and recovery control: Collisions between quadrotor UAVs and the environment often occur, for instance, under faulty piloting, from wind gusts, or when obstacle avoidance fails. Airspace regulations are forcing drone companies to build safer drones; many quadrotor drones now incorporate propeller protection. However, propeller protected quadrotors still do not detect or react to collisions with objects such as walls, poles and cables. In this paper, we present a collision recovery pipeline which controls propeller protected quadrotors to recover from collisions.
This pipeline combines concepts from impact dynamics, fuzzy logic, and aggressive quadrotor attitude control. The strategy is validated via a comprehensive Monte Carlo simulation of collisions against a wall, showing the feasibility of recovery from challenging collision scenarios. The pipeline is implemented on a custom experimental quadrotor platform, demonstrating feasibility of real-time performance and successful recovery from a range of pre-collision conditions. The ultimate goal of the research is to implement a general collision recovery solution as a safety feature for quadrotor flight controllers.) <|cite_end|> <|cite_start|> (Reference: Recovery control for quadrotor uav colliding with a pole: Small quadrotor UAVs are projected to fly increasingly in urban environments for a wide variety of applications such as disaster response, police surveillance, civil infrastructure inspection, and air quality measurement. Micro UAVs can detect and avoid obstacles using onboard cameras; nevertheless, disturbances such as wind gusts, operator error, or failure of onboard vision can still result in dangerous collisions with objects. In the urban setting, the most predominant obstacles are walls and poles. With the aim of developing collision recovery control solutions for quadrotor UAVs, this paper investigates the collision dynamics between a propeller-protected quadrotor UAV and a vertical pole. Simulations provide insight into a quadrotor's post-collision dynamics and experimental trials demonstrate the feasibility of autonomously recovering to stable flight using only inertial onboard sensing in real-time.) <|cite_end|> focus on collision detection and characterization in a 2D environment. However, the obstacles in cluttered environments are often not on the same level as MAVs, which means that collisions can occur from any direction. In this work, we combine the Air Bumper framework with LiDAR-based sensing on a caged, collision-resilient MAV. This allows for collision detection and estimation in 3D space and the generation of smooth reaction trajectories with the help of collision-aware mapping. <|paper_end|>
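The works surveyed above share a common low-level primitive: flag a collision from high-rate inertial data, estimate where the impact came from, and steer away. None of their implementations is reproduced here, so the following Python fragment is only a minimal sketch of that primitive; the function names, the 8 m/s^2 threshold, and the sign conventions are illustrative assumptions, not taken from the cited works or from the Air Bumper framework.

```python
import numpy as np

def detect_collision(accel_meas, accel_cmd, threshold=8.0):
    """Flag a collision when the measured body acceleration deviates from
    the commanded/expected one by more than `threshold` m/s^2 (a proxy for
    an external contact force). Returns (collided, impact_direction), where
    the direction is a unit vector pointing from the vehicle to the impact."""
    residual = np.asarray(accel_meas, dtype=float) - np.asarray(accel_cmd, dtype=float)
    magnitude = np.linalg.norm(residual)
    if magnitude < threshold:
        return False, None
    # The contact force pushes the vehicle away from the obstacle, so the
    # impact lies opposite to the acceleration residual.
    return True, -residual / magnitude

def reaction_setpoint(position, impact_direction, backoff=0.5):
    """Crude recovery goal `backoff` metres away from the estimated impact;
    a real system would replan a smooth trajectory from here instead."""
    return np.asarray(position, dtype=float) - backoff * np.asarray(impact_direction)

# Example: a hover command disturbed by a lateral impact along +x.
hit, direction = detect_collision(accel_meas=[9.5, 0.2, 9.8],
                                  accel_cmd=[0.0, 0.0, 9.81])
if hit:
    goal = reaction_setpoint(position=[1.0, 2.0, 1.5], impact_direction=direction)
```

Because the estimate lives in 3D, the same rule covers impacts from above or below, which is precisely the gap the paragraph above identifies in the 2D-only prior work.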
[ "<|reference_start|> Review of Unmanned Aerial System (UAS) applications in the built environment: Towards automated building inspection procedures using drones: <|reference_end|>", "<|reference_start|> Model predictive local motion planning with boundary state constrained primitives: Motion primitives are frequently used to find valid local trajectories for mobile robots, especially in cases where fast replanning is required, but the onboard computational power is limited. In this letter, we present a practical framework for constructing motion primitives from boundary state constraints, and then using them for online planning. The primitives are offline constructed with either a boundary value problem solver or a controller. They are then approximated with a neural network for fast evaluation during online optimization. The references and nominal inputs are generated in a receding horizon fashion by solving a model predictive control problem in the continuous domain with either gradient-based or gradient-free techniques. The proposed approach is computationally efficient and has been tested on quadrotors in real flight experiments, including sensor-based navigation, flying through a complex three-dimensional environment, dynamic obstacle avoidance, and tracking moving references. <|reference_end|>", "<|reference_start|> A UAV-based explore-then-exploit system for autonomous indoor facility inspection and scene reconstruction: <|reference_end|>", "<|reference_start|> Fly-crash-recover: A sensor-based reactive framework for online collision recovery of uavs: Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular thanks to the multiplicity of operations in which they can be deployed such as surveillance, search and rescue, mapping, transportation, hobby and recreational activities. Although sensors like LIDARs and cameras are often present on such systems for motion planning to avoid obstacles, collisions can still occur in very dense and unstructured environments, especially if disturbances are present. In this work, we research techniques to recover UAVs after a collision has occurred. We note that the on-board sensors, especially the inertial sensor used to stabilize the UAV, run at a high frequencies obtaining hundreds of data points every second. At run-time, this can be leveraged at the moment of a collision to quickly detect and recover the system. Our approach considers knowledge of UAV system dynamics to predict the expected behavior of the vehicle under safe flight conditions and leverage such expectations together with inertial data to detect collisions rapidly (on the order of milliseconds). We also propose a potential field-based approach to map the collision and create the correct reactive maneuver to avoid the collided object and bring the system back to a stable and safe configuration. Experiments are executed using ROS on two micro-quadrotor UAV platforms having different dynamics and performances, while colliding with poles and walls positioned in different configurations. In our results, we are able to show that the UAVs are successfully able to detect and avoid a collision, while also providing a rigorous analysis of the conditions in which the system can recover from imminent collisions. <|reference_end|>" ]
[ 0, 11, 16, 21 ]
{"<|multi_cite_1_1|>": "ss-1839678", "<|multi_cite_1_2|>": "ss-845410", "<|multi_cite_1_3|>": "ss-806353", "<|multi_cite_2_1|>": "ss-1210822", "<|multi_cite_2_2|>": "arxiv-328822", "<|multi_cite_2_3|>": "arxiv-335496", "<|multi_cite_3_1|>": "ss-1839679", "<|multi_cite_3_2|>": "ss-1441181", "<|cite_4|>": "arxiv-355062", "<|cite_5|>": "ss-735936", "<|multi_cite_6_1|>": "arxiv-276885", "<|multi_cite_6_2|>": "ss-1839680", "<|multi_cite_7_1|>": "ss-1294102", "<|multi_cite_7_2|>": "arxiv-301370", "<|multi_cite_7_3|>": "ss-1839681", "<|multi_cite_8_1|>": "arxiv-400907", "<|multi_cite_8_2|>": "ss-1839682", "<|cite_9|>": "arxiv-221215", "<|cite_10|>": "ss-679511", "<|multi_cite_11_1|>": "arxiv-301370", "<|multi_cite_11_2|>": "arxiv-449312", "<|cite_12|>": "ss-1839681", "<|multi_cite_13_1|>": "ss-679509", "<|multi_cite_13_2|>": "ss-679510"}
1309.6683
<|paper_start|> Title: Dynamic Structural Equation Models for Social Network Topology Inference Abstract: Dynamic Structural Equation Models for Social Network Topology Inference: Many real-world processes evolve in cascades over complex networks, whose topologies are often unobservable and change over time. However, the so-termed adoption times when blogs mention popular news items, individuals in a community catch an infectious disease, or consumers adopt a trendy electronics product are typically known, and are implicitly dependent on the underlying network. To infer the network topology, a \textit{dynamic} structural equation model is adopted to capture the relationship between observed adoption times and the unknown edge weights. Assuming a slowly time-varying topology and leveraging the sparse connectivity inherent to social networks, edge weights are estimated by minimizing a sparsity-regularized exponentially-weighted least-squares criterion. To this end, solvers with complementary strengths are developed by leveraging (pseudo) real-time sparsity-promoting proximal gradient iterations, the improved convergence rate of accelerated variants, or reduced computational complexity of stochastic gradient descent. Numerical tests with both synthetic and real data demonstrate the effectiveness of the novel algorithms in unveiling sparse dynamically-evolving topologies, while accounting for external influences in the adoption times. Key events in the recent succession of political leadership in North Korea, explain connectivity changes observed in the associated network inferred from global cascades of online media. Introduction \label{sec:introduction} Networks arising in natural and man-made settings provide the backbone for the propagation of \emph{contagions} such as the spread of popular news stories, the adoption of buying trends among consumers, and the spread of infectious diseases <|cite_start|> (Reference: Diffusion of innovations: A first theory of innovation diffusion was formalized by Everett Rogers in a 1962 book called Diffusion of Innovations. Rogers stated that adopters of any new innovation or idea could be categorized as innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%) and laggards (16%), based on a bell curve. Each adopter's willingness and ability to adopt an innovation would depend on their awareness, interest, evaluation, trial, and adoption. Some of the characteristics of each category of adopter include:) <|cite_end|> <|cite_start|> (Reference: Networks, Crowds, and Markets: Reasoning About a Highly Connected World: Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. 
It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.) <|cite_end|>. For example, a terrorist attack may be reported within minutes on mainstream news websites. An information cascade emerges because these websites' readership typically includes bloggers who write about the attack as well, influencing their own readers in turn to do the same. Although the times when ``nodes'' get infected are often observable, the underlying network topologies over which cascades propagate are typically unknown and dynamic. Knowledge of the topology plays a crucial role for several reasons, e.g., when social media advertisers select a small set of initiators so that an online campaign can go viral, or when healthcare initiatives wish to infer hidden needle-sharing networks of injecting drug users. As a general principle, network structural information can be used to predict the behavior of complex systems <|cite_start|> (Reference: Statistical analysis of network data: methods and models: In the past decade, the study of networks has increased dramatically. Researchers from across the sciences (including biology and bioinformatics, computer science, economics, engineering, mathematics, physics, sociology, and statistics) are more and more involved with the collection and statistical analysis of network-indexed data. As a result, statistical methods and models are being developed in this area at a furious pace, with contributions coming from a wide spectrum of disciplines. This book provides an up-to-date treatment of the foundations common to the statistical analysis of network data across the disciplines. The material is organized according to a statistical taxonomy, although the presentation entails a conscious balance of concepts versus mathematics. In addition, the examples (including extended case studies) are drawn widely from the literature. This book should be of substantial interest both to statisticians and to anyone else working in the area of network science. The coverage of topics in this book is broad, but unfolds in a systematic manner, moving from descriptive (or exploratory) methods, to sampling, to modeling and inference. Specific topics include network mapping, characterization of network structure, network sampling, and the modeling, inference, and prediction of networks, network processes, and network flows. This book is the first such resource to present material on all of these core topics in one place.) <|cite_end|>, such as the evolution and spread of information pathways in online media underlying, e.g., major social movements and uprisings due to political conflicts <|cite_start|> (Reference: Structure and Dynamics of Information Pathways in Online Media: Diffusion of information, spread of rumors and infectious diseases are all instances of stochastic processes that occur over the edges of an underlying network. Many times networks over which contagions spread are unobserved, and such networks are often dynamic and change over time. In this paper, we investigate the problem of inferring dynamic networks based on information diffusion data. We assume there is an unobserved dynamic network that changes over time, while we observe the results of a dynamic process spreading over the edges of the network. The task then is to infer the edges and the dynamics of the underlying network.
We develop an on-line algorithm that relies on stochastic convex optimization to efficiently solve the dynamic network inference problem. We apply our algorithm to information diffusion among 3.3 million mainstream media and blog sites and experiment with more than 179 million different pieces of information spreading over the network in a one year period. We study the evolution of information pathways in the online media space and find interesting insights. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in matter of days for on-going news events. Major social movements and events involving civil population, such as the Libyan's civil war or Syria's uprise, lead to an increased amount of information pathways among blogs as well as in the overall increase in the network centrality of blogs and social media sites.) <|cite_end|>. Inference of networks using temporal traces of infection events has recently become an active area of research. According to the taxonomy in~\cite[Ch. 7]{kolaczyk_book}, this can be viewed as a problem involving inference of \textit{association} networks. Two other broad classes of network topology identification problems entail (individual) link prediction, or, tomographic inference. Several prior approaches postulate probabilistic models and rely on maximum likelihood estimation (MLE) to infer edge weights as pairwise transmission rates between nodes <|cite_start|> (Reference: Uncovering the Temporal Dynamics of Diffusion Networks: Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.) <|cite_end|>, <|cite_start|> (Reference: On the Convexity of Latent Social Network Inference: In many real-world scenarios, it is nearly impossible to collect explicit social network data. In such cases, whole networks must be inferred from underlying observations. Here, we formulate the problem of inferring latent social networks based on network diffusion or disease propagation data. We consider contagions propagating over the edges of an unobserved social network, where we only observe the times when nodes became infected, but not who infected them. Given such node infection times, we then identify the optimal network that best explains the observed data. 
We present a maximum likelihood approach based on convex programming with a l1-like penalty term that encourages sparsity. Experiments on real and synthetic data reveal that our method near-perfectly recovers the underlying network structure as well as the parameters of the contagion propagation model. Moreover, our approach scales well as it can infer optimal networks of thousands of nodes in a matter of minutes.) <|cite_end|>. However, these methods assume that the network does not change over time. A dynamic algorithm has been recently proposed to infer time-varying diffusion networks by solving an MLE problem via stochastic gradient descent iterations <|cite_start|> (Reference: Structure and Dynamics of Information Pathways in Online Media: Diffusion of information, spread of rumors and infectious diseases are all instances of stochastic processes that occur over the edges of an underlying network. Many times networks over which contagions spread are unobserved, and such networks are often dynamic and change over time. In this paper, we investigate the problem of inferring dynamic networks based on information diffusion data. We assume there is an unobserved dynamic network that changes over time, while we observe the results of a dynamic process spreading over the edges of the network. The task then is to infer the edges and the dynamics of the underlying network. We develop an on-line algorithm that relies on stochastic convex optimization to efficiently solve the dynamic network inference problem. We apply our algorithm to information diffusion among 3.3 million mainstream media and blog sites and experiment with more than 179 million different pieces of information spreading over the network in a one year period. We study the evolution of information pathways in the online media space and find interesting insights. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in matter of days for on-going news events. Major social movements and events involving civil population, such as the Libyan's civil war or Syria's uprise, lead to an increased amount of information pathways among blogs as well as in the overall increase in the network centrality of blogs and social media sites.) <|cite_end|>. Although successful experiments on large-scale web data reliably uncover information pathways, the estimator in <|cite_start|> (Reference: Structure and Dynamics of Information Pathways in Online Media: Diffusion of information, spread of rumors and infectious diseases are all instances of stochastic processes that occur over the edges of an underlying network. Many times networks over which contagions spread are unobserved, and such networks are often dynamic and change over time. In this paper, we investigate the problem of inferring dynamic networks based on information diffusion data. We assume there is an unobserved dynamic network that changes over time, while we observe the results of a dynamic process spreading over the edges of the network. The task then is to infer the edges and the dynamics of the underlying network. We develop an on-line algorithm that relies on stochastic convex optimization to efficiently solve the dynamic network inference problem. We apply our algorithm to information diffusion among 3.3 million mainstream media and blog sites and experiment with more than 179 million different pieces of information spreading over the network in a one year period. 
We study the evolution of information pathways in the online media space and find interesting insights. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in matter of days for on-going news events. Major social movements and events involving civil population, such as the Libyan's civil war or Syria's uprise, lead to an increased amount of information pathways among blogs as well as in the overall increase in the network centrality of blogs and social media sites.) <|cite_end|> does not explicitly account for edge sparsity prevalent in social and information networks. Moreover, most prior approaches only attribute node infection events to the network topology, and do not account for the influence of external sources such as a ground crew for a mainstream media website. The propagation of a contagion is tantamount to \textit{causal} effects or interactions being exerted among entities such as news portals and blogs, consumers, or people susceptible to being infected with a contagious disease. Acknowledging this viewpoint, \textit{structural equation models} (SEMs) provide a general statistical modeling technique to estimate causal relationships among traits; see e.g., <|cite_start|> (Reference: Structural Equation Modeling: Foundations and Extensions: Preface to the Second Edition 1. Historical Foundations of Structural Equation Modeling for Continuous and Categorical Latent Variables 2. Path Analysis: Modeling Systems of Structural Equations Among Observed Variables 3. Factor Analysis 4. Structural Equation Models in Single and Multiple Groups 5. Statistical Assumptions Underlying Structural Equation Modeling 6. Evaluating and Modifying Structural Equation Models 7. Multilevel Structural Equation Modeling 8. Latent Growth Curve Modeling 9. Structural Models for Categorical and Continuous Latent Variables 10. Epilogue: Toward a New Approach to the Practice of Structural Equation Modeling) <|cite_end|> <|cite_start|> (Reference: {Causality: Models, reasoning, and inference: 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect.) <|cite_end|>. These directional effects are often not revealed by standard linear models that leverage symmetric associations between random variables, such as those represented by covariances or correlations, <|cite_start|> (Reference: {High-dimensional graphs and variable selection with the Lasso: The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models.
We show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely joining some distinct connectivity components of the graph, consistent estimation for sparse graphs is achieved (with exponential rates), even when the number of variables grows as the number of observations raised to an arbitrary power.) <|cite_end|>, <|cite_start|> (Reference: {Sparse inverse covariance estimation with the graphical lasso: We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem ( approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.) <|cite_end|>, <|cite_start|> (Reference: Estimating time-varying networks: An evacuated and hermetically sealed bellows assembly has a bellows core assembly for mechanical movement versus pressure differential requirements. Mechanical attachment means are secured to the opposite ends of the bellows assembly. Vibrational optimization is provided to the bellows assembly to reduce predetermined frequencies.) <|cite_end|>, <|cite_start|> (Reference: Sparse graphical modeling of piecewise-stationary time series: Graphical models are useful for capturing interdependencies of statistical variables in various fields. Estimating parameters describing sparse graphical models of stationary multivariate data is a major task in areas as diverse as biostatistics, econometrics, social networks, and climate data analysis. Even though time series in these applications are often non-stationary, revealing interdependencies through sparse graphs has not advanced as rapidly, because estimating such time-varying models is challenged by the curse of dimensionality and the associated complexity which is prohibitive. The goal of this paper is to introduce novel algorithms for joint segmentation and estimation of sparse, piecewise stationary, graphical models. The crux of the proposed approach is application of dynamic programming in conjunction with cost functions regularized with terms promoting the right form of sparsity in the right application domain. As a result, complexity of the novel schemes scales gracefully with the problem dimension.) <|cite_end|>. SEMs are attractive because of their simplicity and ability to capture edge directionalities. They have been widely adopted in many fields, such as economics, psychometrics <|cite_start|> (Reference: A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators: ) <|cite_end|>, social sciences <|cite_start|> (Reference: STRUCTURAL EQUATION METHODS IN THE SOCIAL SCIENCES: ) <|cite_end|>, and genetics <|cite_start|> (Reference: Gene network inference via structural equation modeling in genetical genomics experiments: Our goal is gene network inference in genetical genomics or systems genetics experiments. 
For species where sequence information is available, we first perform expression quantitative trait locus (eQTL) mapping by jointly utilizing cis-, cis–trans-, and trans-regulation. After using local structural models to identify regulator–target pairs for each eQTL, we construct an encompassing directed network (EDN) by assembling all retained regulator–target relationships. The EDN has nodes corresponding to expressed genes and eQTL and directed edges from eQTL to cis-regulated target genes, from cis-regulated genes to cis–trans-regulated target genes, from trans-regulator genes to target genes, and from trans-eQTL to target genes. For network inference within the strongly constrained search space defined by the EDN, we propose structural equation modeling (SEM), because it can model cyclic networks and the EDN indeed contains feedback relationships. On the basis of a factorization of the likelihood and the constrained search space, our SEM algorithm infers networks involving several hundred genes and eQTL. Structure inference is based on a penalized likelihood ratio and an adaptation of Occam's window model selection. The SEM algorithm was evaluated using data simulated with nonlinear ordinary differential equations and known cyclic network topologies and was applied to a real yeast data set.) <|cite_end|> <|cite_start|> (Reference: Gene network inference via sparse structural equation modeling with genetic perturbations: Structural equation models (SEMs) have been recently proposed to infer gene regulatory network using gene expression data and genetic perturbations. However, lack of efficient inference method for SEMs prevents practical use of SEMs in the inference of relatively large gene networks. In this paper, relying on the sparsity of gene networks, we develop an efficient SEM-based method for inferring gene networks using both gene expression and expression quantitative trait locus (eQTL) data. Simulated tests demonstrate that the novel method significantly outperform state-of-the-art methods in the field.) <|cite_end|>. In particular, SEMs have recently been proposed for \textit{static} gene regulatory network inference from gene expression data; see e.g., <|cite_start|> (Reference: Gene network inference via sparse structural equation modeling with genetic perturbations: Structural equation models (SEMs) have been recently proposed to infer gene regulatory network using gene expression data and genetic perturbations. However, lack of efficient inference method for SEMs prevents practical use of SEMs in the inference of relatively large gene networks. In this paper, relying on the sparsity of gene networks, we develop an efficient SEM-based method for inferring gene networks using both gene expression and expression quantitative trait locus (eQTL) data. Simulated tests demonstrate that the novel method significantly outperform state-of-the-art methods in the field.) <|cite_end|> <|cite_start|> (Reference: Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations: Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. 
Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data.) <|cite_end|> and references therein. However, SEMs have not been utilized to track the dynamics of causal effects among interacting nodes, or, to infer the topology of time-varying directed networks. In this context, the present paper proposes a \textit{dynamic} SEM to account for directed networks over which contagions propagate, and describes how node infection times depend on both topological (causal) and external influences. Topological influences are modeled in Section \ref{sec:model} as linear combinations of infection times of other nodes in the network, whose weights correspond to entries in the time-varying asymmetric adjacency matrix. Accounting for external influences is well motivated by drawing upon examples from online media, where established news websites depend more on on-site reporting than blog references. External influence data is also useful for model identifiability, since it has been shown necessary to resolve directional ambiguities <|cite_start|> (Reference: Identifiability of sparse structural equation models for directed and cyclic networks: Structural equation models (SEMs) provide a statistical description of directed networks. The networks modeled by SEMs may have signed edge weights, a property that is pertinent to represent the activating and inhibitory interactions characteristic of biological systems, as well as the collaborative and antagonist behaviors found in social networks, among other applications. They may also have cyclic paths, accommodating the presence of protein stabilizing loops, or the feedback in decision making processes. Starting from the mathematical description of a linear SEM, this paper aims to identify the topology, edge directions, and edge weights of the underlying network. It is established that perturbation data is essential for this purpose, otherwise directional ambiguities cannot be resolved. 
It is also proved that the required amount of data is significantly reduced when the network topology is assumed to be sparse; that is, when the number of incoming edges per node is much smaller than the network size. Identifying a dynamic network with step changes across time is also considered, but it is left as an open problem to be addressed in an extended version of this paper.) <|cite_end|>. Supposing the network varies slowly with time, parameters in the proposed dynamic SEM are estimated adaptively by minimizing a sparsity-promoting exponentially-weighted least-squares (LS) criterion (Section \ref{ssec:rls}). To account for the inherently sparse connectivity of social networks, an $\ell_1$-norm regularization term that promotes sparsity on the entries of the network adjacency matrix is incorporated in the cost function; see also <|cite_start|> (Reference: Sparse LMS for system identification: We propose a new approach to adaptive system identification when the system model is sparse. The approach applies ℓ1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the zero-attracting LMS (ZA-LMS) and the reweighted zero-attracting LMS (RZA-LMS). The ZA-LMS is derived via combining a ℓ1 norm penalty on the coefficients into the quadratic LMS cost function, which generates a zero attractor in the LMS iteration. The zero attractor promotes sparsity in taps during the filtering process, and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is superior to that of the ZA-LMS numerically. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust when the number of non-zero taps increases.) <|cite_end|> <|cite_start|> (Reference: Online Sparse System Identification and Signal Reconstruction using Projections onto Weighted $\ell_1$ Balls: This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion, that quantifies "data mismatch". Sparsity is imposed by the introduction of appropriately designed weighted $\ell_1$ balls. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted $\ell_1$ balls. The resulting scheme exhibits linear dependence, with respect to the unknown system's order, on the number of multiplications/additions and an $\mathcal{O}(L\log_2L)$ dependence on sorting operations, where $L$ is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the LASSO algorithm and two very recently developed adaptive sparse LMS and LS-type of adaptive algorithms, which are considered to belong to the same algorithmic family.) 
<|cite_end|> <|cite_start|> (Reference: Online adaptive estimation of sparse signals: Where {RLS} meets the $\ell_1$-norm: Using the ℓ1-norm to regularize the least-squares criterion, the batch least-absolute shrinkage and selection operator (Lasso) has well-documented merits for estimating sparse signals of interest emerging in various applications where observations adhere to parsimonious linear regression models. To cope with high complexity, increasing memory requirements, and lack of tracking capability that batch Lasso estimators face when processing observations sequentially, the present paper develops a novel time-weighted Lasso (TWL) approach. Performance analysis reveals that TWL cannot estimate consistently the desired signal support without compromising rate of convergence. This motivates the development of a time- and norm-weighted Lasso (TNWL) scheme with ℓ1-norm weights obtained from the recursive least-squares (RLS) algorithm. The resultant algorithm consistently estimates the support of sparse signals without reducing the convergence rate. To cope with sparsity-aware recursive real-time processing, novel adaptive algorithms are also developed to enable online coordinate descent solvers of TWL and TNWL that provably converge to the true sparse signal in the time-invariant case. Simulated tests compare competing alternatives and corroborate the performance of the novel algorithms in estimating time-invariant signals, and tracking time-varying signals under sparsity constraints.) <|cite_end|> <|cite_start|> (Reference: Sparse graphical modeling of piecewise-stationary time series: Graphical models are useful for capturing interdependencies of statistical variables in various fields. Estimating parameters describing sparse graphical models of stationary multivariate data is a major task in areas as diverse as biostatistics, econometrics, social networks, and climate data analysis. Even though time series in these applications are often non-stationary, revealing interdependencies through sparse graphs has not advanced as rapidly, because estimating such time-varying models is challenged by the curse of dimensionality and the associated complexity which is prohibitive. The goal of this paper is to introduce novel algorithms for joint segmentation and estimation of sparse, piecewise stationary, graphical models. The crux of the proposed approach is application of dynamic programming in conjunction with cost functions regularized with terms promoting the right form of sparsity in the right application domain. As a result, complexity of the novel schemes scales gracefully with the problem dimension.) <|cite_end|>. A novel algorithm to jointly track the network's adjacency matrix and the weights capturing the level of external influences is developed in Section \ref{ssec:ista}, which minimizes the resulting non-differentiable cost function via a proximal-gradient (PG) solver; see e.g., <|cite_start|> (Reference: Proximal {Algorithms: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. 
Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.) <|cite_end|> <|cite_start|> (Reference: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted 𝓁p‐penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such 𝓁p‐penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.) <|cite_end|> <|cite_start|> (Reference: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.) <|cite_end|>. The resulting dynamic iterative shrinkage-thresholding algorithm (ISTA) is provably convergent, and offers parallel, closed-form, and sparsity-promoting updates per iteration. Proximal-splitting algorithms such as ISTA have been successfully adopted for various signal processing tasks <|cite_start|> (Reference: Proximal Splitting Methods in Signal Processing: ) <|cite_end|>, and for parallel optimization <|cite_start|> (Reference: A proximal decomposition method for solving convex variational inverse problems: A broad range of inverse problems can be abstracted into the problem of minimizing the sum of several convex functions in a Hilbert space. We propose a proximal decomposition algorithm for solving this problem with an arbitrary number of nonsmooth functions and establish its weak convergence. 
The algorithm fully decomposes the problem in that it involves each function individually via its own proximity operator. A significant improvement over the methods currently in use in the area of inverse problems is that it is not limited to two nonsmooth functions. Numerical applications to signal and image processing problems are demonstrated.) <|cite_end|>. Further algorithmic improvements are outlined in Section \ref{sec:alg_improv}. These include enhancing the algorithms' rate of convergence through Nesterov's acceleration techniques <|cite_start|> (Reference: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.) <|cite_end|> <|cite_start|> (Reference: A method for solving the convex programming problem with convergence rate O(1/k^2): ) <|cite_end|> <|cite_start|> (Reference: Smooth minimization of nonsmooth functions with parallel coordinate descent methods: ) <|cite_end|> (Section \ref{ssec:fista}), and also adapting them for real-time operation (Section \ref{ssec:inexact_fista}). When minimal computational complexity is at a premium, a stochastic gradient descent (SGD) algorithm is developed in Section \ref{ssec:stochgrad}, which adaptively minimizes an instantaneous (noisy) approximation of the ensemble LS cost. Throughout, insightful and useful extensions to the proposed algorithms that are not fully developed due to space limitations are highlighted as remarks. Numerical tests on synthetic network data demonstrate the superior error performance of the developed algorithms, and highlight their merits when compared to the sparsity-agnostic approach in <|cite_start|> (Reference: Structure and Dynamics of Information Pathways in Online Media: Diffusion of information, spread of rumors and infectious diseases are all instances of stochastic processes that occur over the edges of an underlying network. Many times networks over which contagions spread are unobserved, and such networks are often dynamic and change over time. In this paper, we investigate the problem of inferring dynamic networks based on information diffusion data. We assume there is an unobserved dynamic network that changes over time, while we observe the results of a dynamic process spreading over the edges of the network. The task then is to infer the edges and the dynamics of the underlying network. We develop an on-line algorithm that relies on stochastic convex optimization to efficiently solve the dynamic network inference problem.
We apply our algorithm to information diffusion among 3.3 million mainstream media and blog sites and experiment with more than 179 million different pieces of information spreading over the network in a one year period. We study the evolution of information pathways in the online media space and find interesting insights. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in matter of days for on-going news events. Major social movements and events involving civil population, such as the Libyan's civil war or Syria's uprise, lead to an increased amount of information pathways among blogs as well as in the overall increase in the network centrality of blogs and social media sites.) <|cite_end|> (Section \ref{ssec:synthetic}). Experiments in Section \ref{ssec:real} involve real temporal traces of popular global events that propagated on news websites and blogs in 2011. Interestingly, topologies inferred from cascades associated with the meme ``Kim Jong-un'' exhibit an abrupt increase in the number of edges following the appointment of the new North Korean ruler. \noindent\textit{Notation}. Bold uppercase (lowercase) letters will denote matrices (column vectors), while operators $(\cdot)^{\top}$, $\lambda_{\max}(\cdot)$, and $\textrm{diag}(\cdot)$ will stand for matrix transposition, maximum eigenvalue, and diagonal matrix, respectively. The $N \times N$ identity matrix will be represented by $\I_N$, while $\mathbf{0}_{N}$ will denote the $N \times 1$ vector of all zeros, and $\mathbf{0}_{N \times P}:=\mathbf{0}_{N} \mathbf{0}^\top_{P}$. The $\ell_p$ and Frobenius norms will be denoted by $\|\cdot\|_p$, and $\|\cdot\|_F$, respectively. <|paper_end|>
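Read in sequence, the introduction above names a model (infection times as sparse linear combinations plus external inputs), a criterion (exponentially-weighted LS with an $\ell_1$ penalty), and a solver (proximal-gradient/ISTA with closed-form updates). A compact way to see how the three fit together is the following sketch; the symbols ($y$, $x$, $a_{ij}^t$, $b_{ii}^t$, $\beta$, $\lambda$, $\mu$) are illustrative assumptions, since the precise notation is fixed only in the paper's Section \ref{sec:model}.

```latex
% Dynamic SEM: the infection time of node i in cascade c mixes topological
% influences (weighted infection times of other nodes) and an external input.
\begin{align*}
  y_{ic} &= \sum_{j \neq i} a_{ij}^{t}\, y_{jc} \;+\; b_{ii}^{t}\, x_{ic} \;+\; e_{ic} \\[4pt]
  % Sparsity-promoting exponentially-weighted LS criterion (forgetting factor 0 < beta <= 1)
  \{\hat{\mathbf{A}}^{t}, \hat{\mathbf{B}}^{t}\}
    &= \arg\min_{\mathbf{A}, \mathbf{B}}\;
       \sum_{\tau=1}^{t} \beta^{\,t-\tau}
       \left\| \mathbf{Y}_{\tau} - \mathbf{A}\mathbf{Y}_{\tau} - \mathbf{B}\mathbf{X}_{\tau} \right\|_F^2
       \;+\; \lambda \sum_{i \neq j} |a_{ij}| \\[4pt]
  % One ISTA step: a gradient move on the smooth LS part f, followed by the
  % closed-form soft-thresholding proximal operator of the l1 term,
  % applied entrywise (hence parallel and sparsity-promoting).
  a_{ij}[k+1] &= \mathcal{S}_{\lambda\mu}\!\left( a_{ij}[k] - \mu\, \big[\nabla f(\mathbf{A}[k], \mathbf{B}[k])\big]_{ij} \right),
  \qquad
  \mathcal{S}_{\gamma}(z) := \operatorname{sign}(z)\,\max\{|z| - \gamma,\, 0\}
\end{align*}
```

The soft-thresholding map $\mathcal{S}_{\gamma}$ is exactly what makes the per-iteration updates parallel, closed-form, and sparsity-promoting, as claimed above: each entry is updated independently and is set to zero whenever its gradient step lands within $\gamma$ of the origin.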
[ "<|reference_start|> Uncovering the Temporal Dynamics of Diffusion Networks: Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data. <|reference_end|>", "<|reference_start|> A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators: <|reference_end|>", "<|reference_start|> Identifiability of sparse structural equation models for directed and cyclic networks: Structural equation models (SEMs) provide a statistical description of directed networks. The networks modeled by SEMs may have signed edge weights, a property that is pertinent to represent the activating and inhibitory interactions characteristic of biological systems, as well as the collaborative and antagonist behaviors found in social networks, among other applications. They may also have cyclic paths, accommodating the presence of protein stabilizing loops, or the feedback in decision making processes. Starting from the mathematical description of a linear SEM, this paper aims to identify the topology, edge directions, and edge weights of the underlying network. It is established that perturbation data is essential for this purpose, otherwise directional ambiguities cannot be resolved. It is also proved that the required amount of data is significantly reduced when the network topology is assumed to be sparse; that is, when the number of incoming edges per node is much smaller than the network size. Identifying a dynamic network with step changes across time is also considered, but it is left as an open problem to be addressed in an extended version of this paper. <|reference_end|>", "<|reference_start|> A proximal decomposition method for solving convex variational inverse problems: A broad range of inverse problems can be abstracted into the problem of minimizing the sum of several convex functions in a Hilbert space. We propose a proximal decomposition algorithm for solving this problem with an arbitrary number of nonsmooth functions and establish its weak convergence. The algorithm fully decomposes the problem in that it involves each function individually via its own proximity operator. A significant improvement over the methods currently in use in the area of inverse problems is that it is not limited to two nonsmooth functions. Numerical applications to signal and image processing problems are demonstrated. 
<|reference_end|>" ]
[ 4, 14, 20, 29 ]
{"<|multi_cite_1_1|>": "ss-1513572", "<|multi_cite_1_2|>": "ss-1040365", "<|cite_2|>": "ss-847634", "<|cite_3|>": "arxiv-38916", "<|cite_4|>": "arxiv-21211", "<|cite_5|>": "arxiv-16911", "<|cite_6|>": "arxiv-38916", "<|cite_7|>": "arxiv-38916", "<|multi_cite_8_1|>": "ss-2014908", "<|multi_cite_8_2|>": "ss-812973", "<|cite_9|>": "ss-682873", "<|cite_10|>": "ss-752057", "<|cite_11|>": "ss-1201207", "<|cite_12|>": "ss-2300005", "<|cite_13|>": "ss-2014909", "<|cite_14|>": "ss-1762624", "<|multi_cite_15_1|>": "ss-2014910", "<|multi_cite_15_2|>": "ss-2014911", "<|multi_cite_16_1|>": "ss-2014911", "<|multi_cite_16_2|>": "ss-2014912", "<|cite_17|>": "ss-2014913", "<|multi_cite_18_1|>": "ss-990968", "<|multi_cite_18_2|>": "arxiv-12873", "<|multi_cite_18_3|>": "ss-857744", "<|multi_cite_18_4|>": "ss-2300005", "<|multi_cite_19_1|>": "ss-1282301", "<|multi_cite_19_2|>": "ss-1255314", "<|multi_cite_19_3|>": "ss-735375", "<|cite_20|>": "ss-957869", "<|cite_21|>": "ss-1993978", "<|multi_cite_22_1|>": "ss-735375", "<|multi_cite_22_2|>": "ss-685647", "<|multi_cite_22_3|>": "ss-1339677", "<|cite_23|>": "arxiv-38916"}
2210.07018-0
<|paper_start|> Title: Online matching with delays and stochastic arrival times Abstract: Online matching with delays and stochastic arrival times: This paper presents a new research direction for the Min-cost Perfect Matching with Delays (MPMD) - a problem introduced by Emek et al. (STOC'16). In the original version of this problem, we are given an $n$-point metric space, where requests arrive in an online fashion. The goal is to minimise the matching cost for an even number of requests. However, contrary to traditional online matching problems, a request does not have to be paired immediately at the time of its arrival. Instead, the decision of whether to match a request can be postponed for time $t$ at a delay cost of $t$. For this reason, the goal of the MPMD is to minimise the overall sum of distance and delay costs. Interestingly, for adversarially generated requests, no online algorithm can achieve a competitive ratio better than $O(\log n/\log \log n)$ (Ashlagi et al., APPROX/RANDOM'17). Here, we consider a stochastic version of the MPMD problem where the input requests follow a Poisson arrival process. For such a problem, we show that the above lower bound can be improved by presenting two deterministic online algorithms, which, in expectation, are constant-competitive. The first one is a simple greedy algorithm that matches any two requests once the sum of their delay costs exceeds their connection cost, i.e., the distance between them. The second algorithm builds on the tools used to analyse the first one in order to obtain even better performance guarantees. This result is rather surprising as the greedy approach for the adversarial model achieves a competitive ratio of $\Omega(m^{\log \frac{3}{2}+\varepsilon})$, where $m$ denotes the number of requests served (Azar et al., TOCS'20). Finally, we prove that it is possible to obtain similar results for the general case when the delay cost follows an arbitrary positive and non-decreasing function, as well as for the MPMD variant with penalties to clear pending requests. Introduction Imagine players logging into an online platform to compete against each other in a two player game. The platform needs to pair them in a way that maximizes the overall satisfaction from the gameplay. Typically, a player prefers to be matched with someone with similar gaming skills. Thus, the platform has to consider the experience gap when pairing two players. This skill level difference is referred to as the \emph{connection cost}. Additionally, once logged in, a player can tolerate some waiting time to be matched --- this is why the platform can postpone the pairing decision in the hope of a better matching to be found (i.e., the login of another player with similar skills). Nonetheless, the waiting time for each player has its limits. A player may become unsatisfied if their gaming request has been ignored for too long. This time gap between logging into the platform and joining a gaming session is referred to as the \emph{delay cost}. The platform's goal is to pair all the online players into sessions, such that the total connection cost plus the total delay cost produced is minimized. 
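The abstract above already pins down the greedy rule analysed later: match two pending requests as soon as the sum of their delay costs exceeds their connection cost. The toy Python simulation below makes that rule concrete; the time discretisation, the function names, and the example instance are illustrative assumptions rather than code from the paper, and it assumes an even number of requests so that a perfect matching exists.

```python
import itertools

def greedy_mpmd(requests, dist, time_step=0.01, horizon=100.0):
    """Simulate the greedy rule: match two pending requests as soon as the
    sum of their waiting times (delay cost) reaches the distance between
    them (connection cost). `requests` is a list of (arrival_time, point);
    `dist` is the metric. Returns the matching and its total cost."""
    queue = sorted(requests)          # arrivals in time order
    pending, matching = [], []
    total_cost, t = 0.0, 0.0
    while (queue or pending) and t <= horizon:
        while queue and queue[0][0] <= t:         # admit arrivals up to time t
            pending.append(queue.pop(0))
        matched = True
        while matched and len(pending) >= 2:      # apply the greedy rule
            matched = False
            for a, b in itertools.combinations(pending, 2):
                delay = (t - a[0]) + (t - b[0])
                if delay >= dist(a[1], b[1]):
                    pending.remove(a)
                    pending.remove(b)
                    matching.append((a, b))
                    total_cost += dist(a[1], b[1]) + delay
                    matched = True
                    break
        t += time_step
    return matching, total_cost

# Four requests on the real line; close-by pairs are matched quickly, while
# a far-apart request waits until a nearby partner arrives.
reqs = [(0.0, 0.0), (0.3, 5.0), (0.5, 0.2), (1.1, 5.1)]
matching, cost = greedy_mpmd(reqs, dist=lambda p, q: abs(p - q))
print(matching, round(cost, 2))
```

Under adversarial arrivals this rule can be badly fooled, which is why the abstract's $\Omega(m^{\log\frac{3}{2}+\varepsilon})$ lower bound for greedy is so striking next to its constant competitiveness under Poisson arrivals.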
The above is an example of an online problem called Min-cost Perfect Matching with Delays (MPMD) <|cite_start|> (Reference: Online Matching: Haste makes Waste!: This paper studies a new online problem, referred to as \emph{min-cost perfect matching with delays (MPMD)}, defined over a finite metric space (i.e., a complete graph with positive edge weights obeying the triangle inequality) $\mathcal{M}$ that is known to the algorithm in advance. Requests arrive in a continuous time online fashion at the points of $\mathcal{M}$ and should be served by matching them to each other. The algorithm is allowed to delay its request matching commitments, but this does not come for free: the total cost of the algorithm is the sum of metric distances between matched requests \emph{plus} the sum of times each request waited since it arrived until it was matched. A randomized online MPMD algorithm is presented whose competitive ratio is $O (\log^{2} n + \log \Delta)$, where $n$ is the number of points in $\mathcal{M}$ and $\Delta$ is its aspect ratio. The analysis is based on a machinery developed in the context of a new stochastic process that can be viewed as two interleaved Poisson processes; surprisingly, this new process captures precisely the behavior of our algorithm. A related problem in which the algorithm is allowed to clear any unmatched request at a fixed penalty is also addressed. It is suggested that the MPMD problem is merely the tip of the iceberg for a general framework of online problems with delayed service that captures many more natural problems.) <|cite_end|>. It has drawn researchers' attention recently <|cite_start|> (Reference: Online Matching: Haste makes Waste!: This paper studies a new online problem, referred to as \emph{min-cost perfect matching with delays (MPMD)}, defined over a finite metric space (i.e., a complete graph with positive edge weights obeying the triangle inequality) $\mathcal{M}$ that is known to the algorithm in advance. Requests arrive in a continuous time online fashion at the points of $\mathcal{M}$ and should be served by matching them to each other. The algorithm is allowed to delay its request matching commitments, but this does not come for free: the total cost of the algorithm is the sum of metric distances between matched requests \emph{plus} the sum of times each request waited since it arrived until it was matched. A randomized online MPMD algorithm is presented whose competitive ratio is $O (\log^{2} n + \log \Delta)$, where $n$ is the number of points in $\mathcal{M}$ and $\Delta$ is its aspect ratio. The analysis is based on a machinery developed in the context of a new stochastic process that can be viewed as two interleaved Poisson processes; surprisingly, this new process captures precisely the behavior of our algorithm. A related problem in which the algorithm is allowed to clear any unmatched request at a fixed penalty is also addressed. It is suggested that the MPMD problem is merely the tip of the iceberg for a general framework of online problems with delayed service that captures many more natural problems.) <|cite_end|> <|cite_start|> (Reference: Polylogarithmic Bounds on the Competitiveness of Min-Cost Perfect Matching with Delays: We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying n-point metric space.
An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time. The cost incurred is the sum of the distances between matched pairs of requests (the connection cost), and the sum of the waiting times of the requests (the delay cost). We prove the first logarithmic upper bound and the first polylogarithmic lower bound on the randomized competitive ratio of this problem. We present an algorithm with a competitive ratio of O(log n), which improves the upper bound of O(log2 n + log Δ) of Emek et al, by removing the dependence on Δ, the aspect ratio of the metric space (which can be unbounded as a function of n). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height h, whose cost is guaranteed to be at most O(1) times the connection cost plus O(h) times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees is achieved using the result on embedding n-point metric spaces into distributions over weighted hierarchically separated trees of height O(log n), with distortion O(log n). We also prove a lower bound of [EQUATION] on the competitive ratio of any randomized algorithm. This is the first lower bound which increases with n, and is attained on the metric of n equally spaced points on a line.) <|cite_end|> <|cite_start|> (Reference: Min-cost Bipartite Perfect Matching with Delays: In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.) 
<|cite_end|> <|cite_start|> (Reference: A Match in Time Saves Nine: Deterministic Online Matching With Delays: We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number of requests appear in a metric space at different times and the goal of an online algorithm is to match them in pairs. In contrast to traditional online matching problems, in MPMD all requests appear online and an algorithm can match any pair of requests, but such decision may be delayed (e.g., to find a better match). The cost is the sum of matching distances and the introduced delays. We present the first deterministic online algorithm for this problem. Its competitive ratio is $O(m^{\log_2 5.5})$ $ = O(m^{2.46})$, where $2 m$ is the number of requests. This is polynomial in the number of metric space points if all requests are given at different points. In particular, the bound does not depend on other parameters of the metric, such as its aspect ratio. Unlike previous (randomized) solutions for the MPMD problem, our algorithm does not need to know the metric space in advance.) <|cite_end|> <|cite_start|> (Reference: A Primal-Dual Online Deterministic Algorithm for Matching with Delays: In the Min-cost Perfect Matching with Delays (MPMD) problem, 2 m requests arrive over time at points of a metric space. An online algorithm has to connect these requests in pairs, but a decision to match may be postponed till a more suitable matching pair is found. The goal is to minimize the joint cost of connection and the total waiting time of all requests. We present an O(m)-competitive deterministic algorithm for this problem, improving on an existing bound of O(m^(log(5.5))) = O(m^2.46). Our algorithm also solves (with the same competitive ratio) a bipartite variant of MPMD, where requests are either positive or negative and only requests with different polarities may be matched with each other. Unlike the existing randomized solutions, our approach does not depend on the size of the metric space and does not have to know it in advance.) <|cite_end|> <|cite_start|> (Reference: {Impatient Online Matching: We investigate the problem of Min-cost Perfect Matching with Delays (MPMD) in which requests are pairwise matched in an online fashion with the objective to minimize the sum of space cost and time cost. Though linear-MPMD (i.e., time cost is linear in delay) has been thoroughly studied in the literature, it does not well model impatient requests that are common in practice. Thus, we propose convex-MPMD where time cost functions are convex, capturing the situation where time cost increases faster and faster. Since the existing algorithms for linear-MPMD are not competitive any more, we devise a new deterministic algorithm for convex-MPMD problems. For a large class of convex time cost functions, our algorithm achieves a competitive ratio of O(k) on any k-point uniform metric space. Moreover, our deterministic algorithm is asymptotically optimal, which uncover a substantial difference between convex-MPMD and linear-MPMD which allows a deterministic algorithm with constant competitive ratio on any uniform metric space.) <|cite_end|> <|cite_start|> (Reference: Deterministic Min-Cost Matching with Delays: We consider the online Minimum-Cost Perfect Matching with Delays (MPMD) problem introduced by Emek et al. (STOC 2016), in which a general metric space is given, and requests are submitted in different times in this space by an adversary. 
The goal is to match requests, while minimizing the sum of distances between matched pairs in addition to the time intervals passed from the moment each request appeared until it is matched. In the online Minimum-Cost Bipartite Perfect Matching with Delays (MBPMD) problem introduced by Ashlagi et al. (APPROX/RANDOM 2017), each request is also associated with one of two classes, and requests can only be matched with requests of the other class. Previous algorithms for the problems mentioned above, include randomized $O\left(\log n\right)$-competitive algorithms for known and finite metric spaces, $n$ being the size of the metric space, and a deterministic $O\left(m\right)$-competitive algorithm, $m$ being the number of requests. We introduce $O\left(m^{\log\left(\frac{3}{2}+\epsilon\right)}\right)$-competitive deterministic algorithms for both problems and for any fixed $\epsilon > 0$. In particular, for a small enough $\epsilon$ the competitive ratio becomes $O\left(m^{0.59}\right)$. These are the first deterministic algorithms for the mentioned online matching problems, achieving a sub-linear competitive ratio. Our algorithms do not need to know the metric space in advance.) <|cite_end|> <|cite_start|> (Reference: The Min-Cost Matching with Concave Delays Problem: We consider the problem of online min-cost perfect matching with concave delays. We begin with the single location variant. Specifically, requests arrive in an online fashion at a single location. The algorithm must then choose between matching a pair of requests or delaying them to be matched later on. The cost is defined by a concave function on the delay. Given linear or even convex delay functions, matching any two available requests is trivially optimal. However, this does not extend to concave delays. We solve this by providing an $O(1)$-competitive algorithm that is defined through a series of delay counters. Thereafter we consider the problem given an underlying $n$-points metric. The cost of a matching is then defined as the connection cost (as defined by the metric) plus the delay cost. Given linear delays, this problem was introduced by Emek et al. and dubbed the Min-cost perfect matching with linear delays (MPMD) problem. Liu et al. considered convex delays and subsequently asked whether there exists a solution with small competitive ratio given concave delays. We show this to be true by extending our single location algorithm and proving $O(\log n)$ competitiveness. Finally, we turn our focus to the bichromatic case, wherein requests have polarities and only opposite polarities may be matched. We show how to alter our former algorithms to again achieve $O(1)$ and $O(\log n)$ competitiveness for the single location and for the metric case.) <|cite_end|> due to its many real-life applications, such as ride-hailing services (e.g., Uber), dating platforms, and kidney exchange programs. Formally, the MPMD problem is defined as follows. The input is a set of $m$ requests arriving at arbitrary times in a metric space $\metricspace = (\setofpoints, \distancesymbol)$ equipped with a distance function $d$. Here, $m$ is an even integer, and $\setofpoints$ denotes the set of points in $\metricspace$. Each request $\request$ is characterized by its \emph{location} $\location{\request} \in \setofpoints$ and \emph{arrival time} $\arrival{\request} \in \positivesubset{\realnum}$.
When two requests $\request$ and $\request'$ are matched into a pair at time $t \ge \max\{\arrival{\request}, \arrival{\request'}\}$, a \emph{connection cost} $\distance{\location{\request}}{\location{\request'}}$ plus a \emph{delay cost} $(t - \arrival{\request}) + (t - \arrival{\request'})$ is incurred. The goal is to minimize the total cost produced by the online algorithm for matching all the requests into pairs. Previously, the MPMD problem was studied in an adversarial model where an online adversary generates the requests at different times in the given metric space $\metricspace$. Under this adversarial model, no online algorithm can achieve a constant competitive ratio: \begin{itemize} \item[-] if the metric is known in advance, the current best competitiveness is $O(\log n)$ (here $n$ denotes the number of points in the metric) \cite[Theorem 3.1]{azar2017polylogarithmic}, and no online algorithm can achieve a competitive ratio better than $\Omega(\log n / \log \log n)$ \cite[Theorem 1]{ashlagi2017min}; \item[-] if the metric is not known in advance, the current best competitiveness is $O(m^{\log 1.5 + \varepsilon}/ \varepsilon)$ (with $\varepsilon > 0$), achieved by a deterministic online greedy algorithm for which this bound is essentially tight \cite[Theorem 1]{azar2020deterministic}. \end{itemize} In fact, it is often too pessimistic to assume that no stochastic information about the input is available. Again, consider the example of matching gaming requests. The online gaming platform has all the historical data and can estimate the arrival frequency of the players with each particular skill level on an hourly basis. Therefore, it is reasonable to assume that the gaming requests follow some stochastic distribution. Depending on the time of day, though, there may be more or fewer players logging in. However, if we divide the timeline into small intervals, it is reasonable to assume that within each of them, the distribution is regular and the requests are mutually independent (since the players do not know each other). Based on these observations, the following question arises naturally: {\em in the case when stochastic information on the input is available, can we devise online algorithms for MPMD with better performance guarantees?} In this paper, we provide an affirmative answer to the question above. We consider a stochastic online version of MPMD by assuming that the requests arrive following a Poisson arrival process. More precisely, the waiting time between any two consecutive requests arriving at any metric point $\anypoint$ follows an exponential distribution $\expdistr{\waitingparam[\anypoint]}$ with parameter $\waitingparam[\anypoint] \ge 0$. Under such a model, the goal of the platform is to minimize the expected cost produced by an algorithm $\algor$ on a random input sequence consisting of $m$ requests. To evaluate the performance of our algorithms on stochastic inputs, we use the {\em ratio of expectations}, which corresponds to the ratio of the expected cost of the algorithm to the expected cost of the optimal offline solution (see Definition \ref{def:roe}). \paragraph{Our contribution.} We prove that the performance guarantee obtained in the Poisson arrival model is significantly better than the current best competitiveness obtained in the adversarial model. More specifically, we show that an intuitive \emph{Greedy} algorithm, which matches any two requests as soon as their total delay cost reaches their distance, achieves a constant ratio of expectations.
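To make the Greedy rule concrete, below is a minimal event-driven simulation sketch in Python (ours, for illustration only; the helper names `poisson_requests` and `greedy_mpmd` are not from the paper). It relies on the observation that, with linear delays, a pending pair with arrival times $a_1, a_2$ and distance $d$ first satisfies $(t - a_1) + (t - a_2) \ge d$ at $t = (a_1 + a_2 + d)/2$, so the simulation only needs to visit arrival times and these candidate matching times.

```python
import itertools
import math
import random

def poisson_requests(rates, horizon, rng):
    # One Poisson arrival process per metric point: inter-arrival times at
    # point x are i.i.d. Exp(rates[x]); we truncate to an even total.
    reqs = []
    for x, lam in rates.items():
        t = rng.expovariate(lam)
        while t < horizon:
            reqs.append((t, x))
            t += rng.expovariate(lam)
    reqs.sort()
    return reqs[: 2 * (len(reqs) // 2)]

def greedy_mpmd(requests, dist):
    # Greedy: a pending pair (a1, x1), (a2, x2) is matched at the first time
    # t >= max(a1, a2) with (t - a1) + (t - a2) >= dist(x1, x2), i.e. at
    # t = max(a1, a2, (a1 + a2 + dist(x1, x2)) / 2).
    pending, i, total = [], 0, 0.0
    while i < len(requests) or len(pending) > 1:
        best = None
        for (a1, x1), (a2, x2) in itertools.combinations(pending, 2):
            t = max(a1, a2, (a1 + a2 + dist(x1, x2)) / 2)
            if best is None or t < best[0]:
                best = (t, (a1, x1), (a2, x2))
        nxt = requests[i][0] if i < len(requests) else math.inf
        if best is not None and best[0] <= nxt:
            t, r1, r2 = best          # this pair's threshold fires first
            pending.remove(r1)
            pending.remove(r2)
            total += dist(r1[1], r2[1]) + (t - r1[0]) + (t - r2[0])
        else:
            pending.append(requests[i])  # next event is an arrival
            i += 1
    return total

rng = random.Random(0)
reqs = poisson_requests({0.0: 1.0, 1.0: 1.0, 5.0: 0.2}, horizon=50.0, rng=rng)
print(greedy_mpmd(reqs, dist=lambda x, y: abs(x - y)))
```

Averaging `greedy_mpmd` over many sampled instances, and dividing by the average offline optimum, gives a Monte-Carlo estimate of the ratio of expectations; note that offline, a pair $(r, r')$ is best matched at time $\max\{a_r, a_{r'}\}$, so the offline optimum is a min-weight perfect matching under edge weights $d(x_r, x_{r'}) + |a_r - a_{r'}|$.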
\begin{restatable}{theorem}{greedyapprox} \label{main:greedy} For MPMD in the Poisson arrival model, the Greedy algorithm achieves a ratio of expectations of $16 / (1 - e^{-2})$. \end{restatable} To prove this theorem, we apply the following strategy. We first notice that the connection cost of a Greedy solution is at most its delay cost. Thus, the core of the proof is to upper bound the delay cost. For this purpose, in Section \ref{section:algorithms}, we define the \emph{radius} $\rho_x \ge 0$ for each metric point $x$. Such a radius depends on the parameters of the problem and roughly corresponds to the expected delay time for matching the requests located on $x$. Then, we show how to use the radius to lower bound the cost of the optimal offline solution. Intuitively, we prove that a request located on $x$ is in expectation responsible for a total cost of $\Omega(\rho_x)$. At this point, it is worth emphasizing once again that in the adversarial model, when the metric is not known in advance, the greedy approach only achieves a competitive ratio of $\Omega(m^{\log \frac{3}{2} + \varepsilon})$ (see the counter example in <|cite_start|> (Reference: Deterministic Min-Cost Matching with Delays: We consider the online Minimum-Cost Perfect Matching with Delays (MPMD) problem introduced by Emek et al. (STOC 2016), in which a general metric space is given, and requests are submitted in different times in this space by an adversary. The goal is to match requests, while minimizing the sum of distances between matched pairs in addition to the time intervals passed from the moment each request appeared until it is matched. In the online Minimum-Cost Bipartite Perfect Matching with Delays (MBPMD) problem introduced by Ashlagi et al. (APPROX/RANDOM 2017), each request is also associated with one of two classes, and requests can only be matched with requests of the other class. Previous algorithms for the problems mentioned above, include randomized $O\left(\log n\right)$-competitive algorithms for known and finite metric spaces, $n$ being the size of the metric space, and a deterministic $O\left(m\right)$-competitive algorithm, $m$ being the number of requests. We introduce $O\left(m^{\log\left(\frac{3}{2}+\epsilon\right)}\right)$-competitive deterministic algorithms for both problems and for any fixed $\epsilon > 0$. In particular, for a small enough $\epsilon$ the competitive ratio becomes $O\left(m^{0.59}\right)$. These are the first deterministic algorithms for the mentioned online matching problems, achieving a sub-linear competitive ratio. Our algorithms do not need to know the metric space in advance.) <|cite_end|>, Appendix A). This notion of radius suggests another potential algorithm for MPMD with stochastic inputs. Indeed, when a new request $r$ arrives on a point $x$, we know that this request will wait for a time $O(\rho_x)$ on average before being matched by the Greedy algorithm. In particular, $r$ will be matched with another request that is at distance $O(\rho_x)$. Therefore, if at the time of $r$'s arrival, there is another pending\footnote{By pending we mean that at that time, the request is still unmatched by the algorithm.} request $r'$ that is at distance less than $\rho_x$, why not match these two requests directly? In Section \ref{section:algorithms}, we formalize this intuition and design an algorithm called \emph{Radius}. Thanks to these anticipated pairings, the performance ratio is improved by a factor of 2.
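The anticipated-pairing idea admits an equally small sketch (again ours, illustrative only; it reuses the imports and conventions of the Greedy sketch above). The exact radii $\rho_x$ come from the paper's analysis and depend on the arrival rates; here they are simply passed in by the caller, and matching a new arrival to the closest pending request within $\rho_x$ is one natural way to instantiate the rule.

```python
def radius_mpmd(requests, dist, rho):
    # Radius rule: on arrival at x, immediately match with the closest
    # pending request within distance rho[x] (anticipated pairing);
    # otherwise fall back to the Greedy threshold rule above.
    pending, i, total = [], 0, 0.0
    while i < len(requests) or len(pending) > 1:
        best = None
        for (a1, x1), (a2, x2) in itertools.combinations(pending, 2):
            t = max(a1, a2, (a1 + a2 + dist(x1, x2)) / 2)
            if best is None or t < best[0]:
                best = (t, (a1, x1), (a2, x2))
        nxt = requests[i][0] if i < len(requests) else math.inf
        if best is not None and best[0] <= nxt:
            t, r1, r2 = best
            pending.remove(r1)
            pending.remove(r2)
            total += dist(r1[1], r2[1]) + (t - r1[0]) + (t - r2[0])
        else:
            a, x = requests[i]
            i += 1
            close = [r for r in pending if dist(r[1], x) <= rho[x]]
            if close:                       # anticipated pairing at time a
                r1 = min(close, key=lambda r: dist(r[1], x))
                pending.remove(r1)
                total += dist(r1[1], x) + (a - r1[0])
            else:
                pending.append((a, x))
    return total
```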
\begin{restatable}{theorem}{radiusapprox} \label{main:radius} For MPMD in the Poisson arrival model, the Radius algorithm achieves a ratio of expectations of $8 / (1 - e^{-2})$. \end{restatable} Finally, we show how to adjust the Greedy and the Radius algorithms to deal with other variants of the MPMD problem in a way that preserves a constant performance ratio. In Section \ref{sec:general_delay}, we look at the generalization of the problem where a request can be delayed for a time $t$ at a cost $f(t)$, where $f$ is a given positive and non-decreasing function. We show that, unless $f$ is such that the expected cost of the optimal offline solution is infinite, our algorithms achieve constant performance ratios, where the constants depend only on the delay cost function $f$. In Section \ref{section:mpmdfp}, we consider the variant of MPMD where we are allowed to clear pending requests for a fixed penalty cost. \paragraph{Related work.} The MPMD problem was introduced by Emek et al.\ <|cite_start|> (Reference: Online Matching: Haste makes Waste!: This paper studies a new online problem, referred to as \emph{min-cost perfect matching with delays (MPMD)}, defined over a finite metric space (i.e., a complete graph with positive edge weights obeying the triangle inequality) $\mathcal{M}$ that is known to the algorithm in advance. Requests arrive in a continuous time online fashion at the points of $\mathcal{M}$ and should be served by matching them to each other. The algorithm is allowed to delay its request matching commitments, but this does not come for free: the total cost of the algorithm is the sum of metric distances between matched requests \emph{plus} the sum of times each request waited since it arrived until it was matched. A randomized online MPMD algorithm is presented whose competitive ratio is $O (\log^{2} n + \log \Delta)$, where $n$ is the number of points in $\mathcal{M}$ and $\Delta$ is its aspect ratio. The analysis is based on a machinery developed in the context of a new stochastic process that can be viewed as two interleaved Poisson processes; surprisingly, this new process captures precisely the behavior of our algorithm. A related problem in which the algorithm is allowed to clear any unmatched request at a fixed penalty is also addressed. It is suggested that the MPMD problem is merely the tip of the iceberg for a general framework of online problems with delayed service that captures many more natural problems.) <|cite_end|>. In their paper, they proposed a randomized online algorithm that achieves a competitive ratio of $O(\log^2 n + \log \Delta)$, where $n$ is the number of points of the metric space and $\Delta$ is the aspect ratio. Later, Azar et al.\ <|cite_start|> (Reference: Polylogarithmic Bounds on the Competitiveness of Min-Cost Perfect Matching with Delays: We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying n-point metric space. An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time. The cost incurred is the sum of the distances between matched pairs of requests (the connection cost), and the sum of the waiting times of the requests (the delay cost). We prove the first logarithmic upper bound and the first polylogarithmic lower bound on the randomized competitive ratio of this problem.
We present an algorithm with a competitive ratio of O(log n), which improves the upper bound of O(log^2 n + log Δ) of Emek et al., by removing the dependence on Δ, the aspect ratio of the metric space (which can be unbounded as a function of n). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height h, whose cost is guaranteed to be at most O(1) times the connection cost plus O(h) times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees is achieved using the result on embedding n-point metric spaces into distributions over weighted hierarchically separated trees of height O(log n), with distortion O(log n). We also prove a lower bound of [EQUATION] on the competitive ratio of any randomized algorithm. This is the first lower bound which increases with n, and is attained on the metric of n equally spaced points on a line.) <|cite_end|> improved the competitive ratio to $O(\log n)$, thereby removing the dependence on $\Delta$ from the competitive ratio. Both of these papers randomly embed the metric space into a tree of distortion $O(\log n)$, and then propose online algorithms on tree metrics. In the adversarial model, this bound is essentially tight, since Ashlagi et al.\ <|cite_start|> (Reference: Min-cost Bipartite Perfect Matching with Delays: In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.) <|cite_end|> showed that any randomized algorithm incurs a competitive ratio of $\Omega(\log n / \log \log n)$.
Note that the above results assume that the $n$-point metric is given in advance. When the metric is not known in advance, Bienkowski et al.\ proposed an $O(m^{2.46})$-competitive online greedy algorithm <|cite_start|> (Reference: A Match in Time Saves Nine: Deterministic Online Matching With Delays: We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number of requests appear in a metric space at different times and the goal of an online algorithm is to match them in pairs. In contrast to traditional online matching problems, in MPMD all requests appear online and an algorithm can match any pair of requests, but such decision may be delayed (e.g., to find a better match). The cost is the sum of matching distances and the introduced delays. We present the first deterministic online algorithm for this problem. Its competitive ratio is $O(m^{\log_2 5.5})$ $ = O(m^{2.46})$, where $2 m$ is the number of requests. This is polynomial in the number of metric space points if all requests are given at different points. In particular, the bound does not depend on other parameters of the metric, such as its aspect ratio. Unlike previous (randomized) solutions for the MPMD problem, our algorithm does not need to know the metric space in advance.) <|cite_end|> and an $O(m)$-competitive online algorithm based on the primal-dual method <|cite_start|> (Reference: A Primal-Dual Online Deterministic Algorithm for Matching with Delays: In the Min-cost Perfect Matching with Delays (MPMD) problem, 2 m requests arrive over time at points of a metric space. An online algorithm has to connect these requests in pairs, but a decision to match may be postponed till a more suitable matching pair is found. The goal is to minimize the joint cost of connection and the total waiting time of all requests. We present an O(m)-competitive deterministic algorithm for this problem, improving on an existing bound of O(m^(log(5.5))) = O(m^2.46). Our algorithm also solves (with the same competitive ratio) a bipartite variant of MPMD, where requests are either positive or negative and only requests with different polarities may be matched with each other. Unlike the existing randomized solutions, our approach does not depend on the size of the metric space and does not have to know it in advance.) <|cite_end|>, where $m$ denotes the number of requests released. Azar and Jacob-Fanani <|cite_start|> (Reference: Deterministic Min-Cost Matching with Delays: We consider the online Minimum-Cost Perfect Matching with Delays (MPMD) problem introduced by Emek et al. (STOC 2016), in which a general metric space is given, and requests are submitted in different times in this space by an adversary. The goal is to match requests, while minimizing the sum of distances between matched pairs in addition to the time intervals passed from the moment each request appeared until it is matched. In the online Minimum-Cost Bipartite Perfect Matching with Delays (MBPMD) problem introduced by Ashlagi et al. (APPROX/RANDOM 2017), each request is also associated with one of two classes, and requests can only be matched with requests of the other class. Previous algorithms for the problems mentioned above, include randomized $O\left(\log n\right)$-competitive algorithms for known and finite metric spaces, $n$ being the size of the metric space, and a deterministic $O\left(m\right)$-competitive algorithm, $m$ being the number of requests.
We introduce $O\left(m^{\log\left(\frac{3}{2}+\epsilon\right)}\right)$-competitive deterministic algorithms for both problems and for any fixed $\epsilon > 0$. In particular, for a small enough $\epsilon$ the competitive ratio becomes $O\left(m^{0.59}\right)$. These are the first deterministic algorithms for the mentioned online matching problems, achieving a sub-linear competitive ratio. Our algorithms do not need to know the metric space in advance.) <|cite_end|> later proposed an $O(m^{\log 1.5 + \varepsilon}/ \varepsilon)$-competitive greedy algorithm, which is currently the best deterministic online algorithm. In the special case of a two-point metric, Emek et al.\ <|cite_start|> (Reference: Minimum Cost Perfect Matching with Delays for Two Sources: ) <|cite_end|> proposed a 3-competitive greedy algorithm. Deryckere and Umboh <|cite_start|> (Reference: Online Matching with Set Delay: We initiate the study of online problems with set delay, where the delay cost at any given time is an arbitrary function of the set of pending requests. In particular, we study the online min-cost perfect matching with set delay (MPMD-Set) problem, which generalises the online min-cost perfect matching with delay (MPMD) problem introduced by Emek et al. (STOC 2016). In MPMD, m requests arrive over time in a metric space of n points. When a request arrives the algorithm must choose to either match or delay the request. The goal is to create a perfect matching of all requests while minimising the sum of distances between matched requests, and the total delay costs incurred by each of the requests. In contrast to previous work we study MPMD-Set in the non-clairvoyant setting, where the algorithm does not know the future delay costs. We first show no algorithm is competitive in n or m. We then study the natural special case of size-based delay where the delay is a non-decreasing function of the number of unmatched requests. Our main result is the first non-clairvoyant algorithms for online min-cost perfect matching with size-based delay that are competitive in terms of m. In fact, these are the first non-clairvoyant algorithms for any variant of MPMD. Furthermore, we prove a lower bound of Omega(n) for any deterministic algorithm and Omega(log n) for any randomised algorithm. These lower bounds also hold for clairvoyant algorithms.) <|cite_end|> studied online matching with set delay, where the delay cost at any given time is an arbitrary function of the set of pending requests. Another line of work considered a bipartite variant of MPMD, i.e., the Min-cost Bipartite Perfect Matching with (linear) Delays (MBPMD), where each request can be either red or blue, and only two requests of different colors can be matched into a pair. For MBPMD, Ashlagi et al.\ <|cite_start|> (Reference: Min-cost Bipartite Perfect Matching with Delays: In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches.
This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.) <|cite_end|> presented two algorithms achieving a competitive ratio of $O(\log n)$ --- the first is an adaptation of Emek et al.'s <|cite_start|> (Reference: Online Matching: Haste makes Waste!: This paper studies a new online problem, referred to as \emph{min-cost perfect matching with delays (MPMD)}, defined over a finite metric space (i.e., a complete graph with positive edge weights obeying the triangle inequality) $\mathcal{M}$ that is known to the algorithm in advance. Requests arrive in a continuous time online fashion at the points of $\mathcal{M}$ and should be served by matching them to each other. The algorithm is allowed to delay its request matching commitments, but this does not come for free: the total cost of the algorithm is the sum of metric distances between matched requests \emph{plus} the sum of times each request waited since it arrived until it was matched. A randomized online MPMD algorithm is presented whose competitive ratio is $O (\log^{2} n + \log \Delta)$, where $n$ is the number of points in $\mathcal{M}$ and $\Delta$ is its aspect ratio. The analysis is based on a machinery developed in the context of a new stochastic process that can be viewed as two interleaved Poisson processes; surprisingly, this new process captures precisely the behavior of our algorithm. A related problem in which the algorithm is allowed to clear any unmatched request at a fixed penalty is also addressed. It is suggested that the MPMD problem is merely the tip of the iceberg for a general framework of online problems with delayed service that captures many more natural problems.) <|cite_end|> algorithm to the bipartite case, and the second is an adaptation of the algorithm proposed by Azar et al.\ <|cite_start|> (Reference: Polylogarithmic Bounds on the Competitiveness of Min-Cost Perfect Matching with Delays: We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying n-point metric space. An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time.
The cost incurred is the sum of the distances between matched pairs of requests (the connection cost), and the sum of the waiting times of the requests (the delay cost). We prove the first logarithmic upper bound and the first polylogarithmic lower bound on the randomized competitive ratio of this problem. We present an algorithm with a competitive ratio of O(log n), which improves the upper bound of O(log^2 n + log Δ) of Emek et al., by removing the dependence on Δ, the aspect ratio of the metric space (which can be unbounded as a function of n). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height h, whose cost is guaranteed to be at most O(1) times the connection cost plus O(h) times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees is achieved using the result on embedding n-point metric spaces into distributions over weighted hierarchically separated trees of height O(log n), with distortion O(log n). We also prove a lower bound of [EQUATION] on the competitive ratio of any randomized algorithm. This is the first lower bound which increases with n, and is attained on the metric of n equally spaced points on a line.) <|cite_end|>. In addition, Ashlagi et al.\ <|cite_start|> (Reference: Min-cost Bipartite Perfect Matching with Delays: In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.) <|cite_end|> presented a lower bound of $\Omega(\sqrt{\log n / \log \log n})$ on the competitive ratio of any randomized algorithm.
The MPMD and MBPMD problems have been investigated in the more general case when any request can be delayed for a duration $t$ at a cost $f(t)$. Liu et al.\ <|cite_start|> (Reference: Impatient Online Matching: We investigate the problem of Min-cost Perfect Matching with Delays (MPMD) in which requests are pairwise matched in an online fashion with the objective to minimize the sum of space cost and time cost. Though linear-MPMD (i.e., time cost is linear in delay) has been thoroughly studied in the literature, it does not well model impatient requests that are common in practice. Thus, we propose convex-MPMD where time cost functions are convex, capturing the situation where time cost increases faster and faster. Since the existing algorithms for linear-MPMD are not competitive any more, we devise a new deterministic algorithm for convex-MPMD problems. For a large class of convex time cost functions, our algorithm achieves a competitive ratio of O(k) on any k-point uniform metric space. Moreover, our deterministic algorithm is asymptotically optimal, which uncover a substantial difference between convex-MPMD and linear-MPMD which allows a deterministic algorithm with constant competitive ratio on any uniform metric space.) <|cite_end|> considered the case when $f$ is a convex function, and established a lower bound of $\Omega(n)$ on the competitive ratio of any deterministic algorithm for Convex-MPMD. Specifically, this bound is obtained for an $n$-point uniform metric and a delay function of the form $f(t) = t^{\alpha}$ for $\alpha > 1$. For this setting, they presented a deterministic algorithm that achieves a competitive ratio of $O(n)$. In the case when $f$ is a concave function, Azar et al.\ <|cite_start|> (Reference: The Min-Cost Matching with Concave Delays Problem: We consider the problem of online min-cost perfect matching with concave delays. We begin with the single location variant. Specifically, requests arrive in an online fashion at a single location. The algorithm must then choose between matching a pair of requests or delaying them to be matched later on. The cost is defined by a concave function on the delay. Given linear or even convex delay functions, matching any two available requests is trivially optimal. However, this does not extend to concave delays. We solve this by providing an $O(1)$-competitive algorithm that is defined through a series of delay counters. Thereafter we consider the problem given an underlying $n$-points metric. The cost of a matching is then defined as the connection cost (as defined by the metric) plus the delay cost. Given linear delays, this problem was introduced by Emek et al. and dubbed the Min-cost perfect matching with linear delays (MPMD) problem. Liu et al. considered convex delays and subsequently asked whether there exists a solution with small competitive ratio given concave delays. We show this to be true by extending our single location algorithm and proving $O(\log n)$ competitiveness. Finally, we turn our focus to the bichromatic case, wherein requests have polarities and only opposite polarities may be matched. We show how to alter our former algorithms to again achieve $O(1)$ and $O(\log n)$ competitiveness for the single location and for the metric case.) <|cite_end|> gave $O(1)$-competitive (resp.\ $O(\log n)$-competitive) deterministic online algorithms for MPMD and MBPMD on a single-point metric (resp.\ any metric).
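For intuition on such generalized delay costs, note that the natural analogue of the Greedy threshold, matching a pending pair at the first time $t$ with $f(t - a_1) + f(t - a_2) \ge d$, no longer has a closed form for general $f$; since $f$ is non-decreasing, however, the crossing time can be located numerically. A small sketch (ours, for illustration, not necessarily the paper's adjusted rule; it assumes $f$ is continuous and grows enough that the threshold is reached before the cap `hi`):

```python
def match_time(a1, a2, d, f, hi=1e6):
    # First t >= max(a1, a2) with f(t - a1) + f(t - a2) >= d, by bisection.
    lo = max(a1, a2)
    if f(lo - a1) + f(lo - a2) >= d:
        return lo
    for _ in range(100):                 # 100 halvings give ample precision
        mid = (lo + hi) / 2
        if f(mid - a1) + f(mid - a2) >= d:
            hi = mid
        else:
            lo = mid
    return hi

# e.g. quadratic delay cost f(t) = t**2:
t = match_time(0.0, 1.0, d=4.0, f=lambda s: s * s)
```

Since each pair's threshold time is fixed once both requests are pending, `match_time` can directly replace the closed-form candidate time in the event-driven sketches above.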
Other classical online problems have also been considered under such a delay setting, such as the online service problem <|cite_start|> (Reference: Online Service with Delay: In this paper, we introduce the online service with delay problem. In this problem, there are $n$ points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of distance traveled by the server and the total delay in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We hope this technique will be useful for related problems such as reordering buffer management, online TSP, vehicle routing, etc. We also generalize our results to $k > 1$ servers.) <|cite_end|> <|cite_start|> (Reference: Online Service with Delay on a Line: ) <|cite_end|> <|cite_start|> (Reference: General Framework for Metric Optimization Problems with Delay or with Deadlines: In this paper, we present a framework used to construct and analyze algorithms for online optimization problems with deadlines or with delay over a metric space. Using this framework, we present algorithms for several different problems. We present an $O(D^{2})$-competitive deterministic algorithm for online multilevel aggregation with delay on a tree of depth $D$, an exponential improvement over the $O(D^{4}2^{D})$-competitive algorithm of Bienkowski et al. (ESA '16), where the only previously-known improvement was for the special case of deadlines by Buchbinder et al. (SODA '17). We also present an $O(\log^{2}n)$-competitive randomized algorithm for online service with delay over any general metric space of $n$ points, improving upon the $O(\log^{4}n)$-competitive algorithm by Azar et al. (STOC '17). In addition, we present the problem of online facility location with deadlines. In this problem, requests arrive over time in a metric space, and need to be served until their deadlines by facilities that are opened momentarily for some cost. We also consider the problem of facility location with delay, in which the deadlines are replaced with arbitrary delay functions. For those problems, we present $O(\log^{2}n)$-competitive algorithms, with $n$ the number of points in the metric space. The algorithmic framework we present includes techniques for the design of algorithms as well as techniques for their analysis.) <|cite_end|>, the multi-level aggregation problem <|cite_start|> (Reference: Online Algorithms for Multi-Level Aggregation: In the Multi-Level Aggregation Problem (MLAP), requests arrive at the nodes of an edge-weighted tree T, and have to be served eventually. A service is defined as a subtree X of T that contains its root. This subtree X serves all requests that are pending in the nodes of X, and the cost of this service is equal to the total weight of X. Each request also incurs waiting cost between its arrival and service times.
The objective is to minimize the total waiting cost of all requests plus the total cost of all service subtrees. MLAP is a generalization of some well-studied optimization problems; for example, for trees of depth 1, MLAP is equivalent to the TCP Acknowledgment Problem, while for trees of depth 2, it is equivalent to the Joint Replenishment Problem. Aggregation problem for trees of arbitrary depth arise in multicasting, sensor networks, communication in organization hierarchies, and in supply-chain management. The instances of MLAP associated with these applications are naturally online, in the sense that aggregation decisions need to be made without information about future requests. Constant-competitive online algorithms are known for MLAP with one or two levels. However, it has been open whether there exist constant competitive online algorithms for trees of depth more than 2. Addressing this open problem, we give the first constant competitive online algorithm for networks of arbitrary (fixed) number of levels. The competitive ratio is O(D^4 2^D), where D is the depth of T. The algorithm works for arbitrary waiting cost functions, including the variant with deadlines. We also show several additional lower and upper bound results for some special cases of MLAP, including the Single-Phase variant and the case when the tree is a path.) <|cite_end|> <|cite_start|> (Reference: O(depth)-Competitive Algorithm for Online Multi-level Aggregation: We consider a multi-level aggregation problem in a weighted rooted tree, studied recently by Bienkowski et al. [7]. In this problem requests arrive over time at the nodes of the tree, and each request specifies a deadline. A request is served by sending it to the root before its deadline at a cost equal to the weight of the path from the node in which it resides to the root. However, requests from different nodes can be aggregated, and served together, so as to save on cost. The cost of serving an aggregated set of requests is equal to the weight of the subtree spanning the nodes in which the requests reside. Thus, the problem is to find a competitive online aggregation algorithm that minimizes the total cost of the aggregated requests. This problem arises naturally in many scenarios, including multicasting, supply-chain management and sensor networks. It is also related to the well studied TCP-acknowledgement problem and the online joint replenishment problem. We present an online O(D)-competitive algorithm for the problem, where D is the depth, or number of levels, of the aggregation tree. This result improves upon the D22D-competitive algorithm obtained recently by Bienkowski et al. [7].) <|cite_end|> <|cite_start|> (Reference: The Online Set Aggregation Problem: ) <|cite_end|> <|cite_start|> (Reference: General Framework for Metric Optimization Problems with Delay or with Deadlines: In this paper, we present a framework used to construct and analyze algorithms for online optimization problems with deadlines or with delay over a metric space. Using this framework, we present algorithms for several different problems. We present an $O(D^{2})$-competitive deterministic algorithm for online multilevel aggregation with delay on a tree of depth $D$, an exponential improvement over the $O(D^{4}2^{D})$-competitive algorithm of Bienkowski et al. (ESA '16), where the only previously-known improvement was for the special case of deadlines by Buchbinder et al. (SODA '17). 
We also present an $O(\log^{2}n)$-competitive randomized algorithm for online service with delay over any general metric space of $n$ points, improving upon the $O(\log^{4}n)$-competitive algorithm by Azar et al. (STOC '17). In addition, we present the problem of online facility location with deadlines. In this problem, requests arrive over time in a metric space, and need to be served until their deadlines by facilities that are opened momentarily for some cost. We also consider the problem of facility location with delay, in which the deadlines are replaced with arbitrary delay functions. For those problems, we present $O(\log^{2}n)$-competitive algorithms, with $n$ the number of points in the metric space. The algorithmic framework we present includes techniques for the design of algorithms as well as techniques for their analysis.) <|cite_end|> <|cite_start|> (Reference: New results on multi-level aggregation: ) <|cite_end|> <|cite_start|> (Reference: The Power of Clairvoyance for Multi-Level Aggregation and Set Cover with Delay: ) <|cite_end|>, facility location <|cite_start|> (Reference: Online Facility Location with Linear Delay: We study the problem of online facility location with delay. In this problem, a sequence of $n$ clients appear in the metric space, and they need to be eventually connected to some open facility. The clients do not have to be connected immediately, but such a choice comes with a penalty: each client incurs a waiting cost (the difference between its arrival and connection time). At any point in time, an algorithm may decide to open a facility and connect any subset of clients to it. This is a well-studied problem both of its own, and within the general class of network design problems with delays. Our main focus is on a new variant of this problem, where clients may be connected also to an already open facility, but such action incurs an extra cost: an algorithm pays for waiting of the facility (a cost incurred separately for each such "late" connection). This is reminiscent of online matching with delays, where both sides of the connection incur a waiting cost. We call this variant two-sided delay to differentiate it from the previously studied one-sided delay. We present an $O(1)$-competitive deterministic algorithm for the two-sided delay variant. On the technical side, we study a greedy strategy, which grows budgets with increasing waiting delays and opens facilities for subsets of clients once sums of these budgets reach certain thresholds. Our technique is a substantial extension of the approach used by Jain, Mahdian and Saberi [STOC 2002] for analyzing the performance of offline algorithms for facility location. We then show how to transform our $O(1)$-competitive algorithm for the two-sided delay variant to $O(\log n / \log \log n)$-competitive deterministic algorithm for one-sided delays. We note that all previous online algorithms for problems with delays in general metrics have at least logarithmic ratios.) <|cite_end|> <|cite_start|> (Reference: General Framework for Metric Optimization Problems with Delay or with Deadlines: In this paper, we present a framework used to construct and analyze algorithms for online optimization problems with deadlines or with delay over a metric space. Using this framework, we present algorithms for several different problems. 
We present an $O(D^{2})$-competitive deterministic algorithm for online multilevel aggregation with delay on a tree of depth $D$, an exponential improvement over the $O(D^{4}2^{D})$-competitive algorithm of Bienkowski et al. (ESA '16), where the only previously-known improvement was for the special case of deadlines by Buchbinder et al. (SODA '17). We also present an $O(\log^{2}n)$-competitive randomized algorithm for online service with delay over any general metric space of $n$ points, improving upon the $O(\log^{4}n)$-competitive algorithm by Azar et al. (STOC '17). In addition, we present the problem of online facility location with deadlines. In this problem, requests arrive over time in a metric space, and need to be served until their deadlines by facilities that are opened momentarily for some cost. We also consider the problem of facility location with delay, in which the deadlines are replaced with arbitrary delay functions. For those problems, we present $O(\log^{2}n)$-competitive algorithms, with $n$ the number of points in the metric space. The algorithmic framework we present includes techniques for the design of algorithms as well as techniques for their analysis.) <|cite_end|> <|cite_start|> (Reference: Beyond tree embeddings--a deterministic framework for network design with deadlines or delay: We consider network design problems with deadline or delay. All previous results for these models are based on randomized embedding of the graph into a tree (HST) and then solving the problem on this tree. We show that this is not necessary. In particular, we design a deterministic framework for these problems which is not based on embedding. This enables us to provide deterministic poly-log($n$)-competitive algorithms for Steiner tree, generalized Steiner tree, node weighted Steiner tree, (non-uniform) facility location and directed Steiner tree with deadlines or with delay (where $n$ is the number of nodes). Our deterministic algorithms also give improved guarantees over some previous randomized results. In addition, we show a lower bound of poly $\text{log}(n)$ for some of these problems, which implies that our framework is optimal up to the power of the poly-log. Our algorithms and techniques differ significantly from those in all previous considerations of these problems.) <|cite_end|>, bin packing <|cite_start|> (Reference: The Price of Clustering in Bin-Packing with Applications to Bin-Packing with Delays: One of the most significant algorithmic challenges in the "big data era" is handling instances that are too large to be processed by a single machine. The common practice in this regard is to partition the massive problem instance into smaller ones and process each one of them separately. In some cases, the solutions for the smaller instances are later on assembled into a solution for the whole instance, but in many cases this last stage cannot be pursued (e.g., because it is too costly, because of locality issues, or due to privacy considerations). Motivated by this phenomenon, we consider the following natural combinatorial question: Given a bin-packing instance (namely, a set of items with sizes in (0, 1] that should be packed into unit capacity bins) I and a partition {I_i}_i of I into clusters, how large is the ratio sum_i OPT(I_i) / OPT(I), where OPT(J) denotes the optimal number of bins into which the items in J can be packed?
In this paper, we investigate the supremum of this ratio over all instances I and partitions {I_i}_i, referred to as the bin-packing price of clustering (PoC). It is trivial to observe that if each cluster contains only one tiny item (and hence, OPT(I_i) = 1), then the PoC is unbounded. On the other hand, a relatively straightforward argument shows that under the constraint that OPT(I_i) >= 2, the PoC is 2. Our main challenge was to determine whether the PoC drops below 2 when OPT(I_i) > 2. In addition, one may hope that lim_{k -> infinity} PoC(k) = 1, where PoC(k) denotes the PoC under the restriction to clusters I_i with OPT(I_i) >= k. We resolve the former question affirmatively and the latter one negatively: Our main results are that PoC(k) <= 1.951 for any k >= 3 and lim_{k -> infinity} PoC(k) = 1.691... Moreover, the former bound cannot be significantly improved as PoC(3) > 1.933. In addition to the immediate contribution of this combinatorial result to "big data" kind of applications, it turns out that it is useful also for an interesting online problem called bin-packing with delays.) <|cite_end|> <|cite_start|> (Reference: On bin packing with clustering and bin packing with delays: We continue the study of two recently introduced bin packing type problems, called bin packing with clustering, and online bin packing with delays. A bin packing input consists of items of sizes not larger than 1, and the goal is to partition or pack them into bins, where the total size of items of every valid bin cannot exceed 1. In bin packing with clustering, items also have colors associated with them. A globally optimal solution can combine items of different colors in bins, while a clustered solution can only pack monochromatic bins. The goal is to compare a globally optimal solution to an optimal clustered solution, under certain constraints on the coloring provided with the input. We show close bounds on the worst-case ratio between these two costs, called "the price of clustering", improving and simplifying previous results. Specifically, we show that the price of clustering does not exceed 1.93667, improving over the previous upper bound of 1.951, and that it is at least 1.93558, improving over the previous lower bound of 1.93344. In online bin packing with delays, items are presented over time. Items may wait to be packed, and an algorithm can create a new bin at any time, packing a subset of already existing unpacked items into it, under the condition that the bin is valid. A created bin cannot be used again in the future, and all items have to be packed into bins eventually. The objective is to minimize the number of used bins plus the sum of waiting costs of all items, called delays. We build on previous work and modify a simple phase-based algorithm. We combine the modification with a careful analysis to improve the previously known competitive ratio from 3.951 to below 3.1551.) <|cite_end|>, set cover <|cite_start|> (Reference: Set Cover with Delay--Clairvoyance Is Not Required: We study the maximal independent set (MIS) and maximum independent set (MAX-IS) problems on dynamic sets of $O(n)$ axis-parallel rectangles, which can be modeled as dynamic rectangle intersection graphs. We consider the fully dynamic vertex update (insertion/deletion) model for two types of rectangles: (i) uniform height and width and (ii) uniform height and arbitrary width. These types of dynamic vertex update problems arise, e.g., in interactive map labeling.
We present the first deterministic algorithm for maintaining a MIS (and thus a 4-approximate MAX-IS) of a dynamic set of uniform rectangles with amortized sub-logarithmic update time. This breaks the natural barrier of $O(\Delta)$ update time (where $\Delta$ is the maximum degree in the graph) for vertex updates presented by Assadi et al. (STOC 2018). We continue by investigating MAX-IS and provide a series of deterministic dynamic approximation schemes. For uniform rectangles, we first give an algorithm that maintains a $4$-approximate MAX-IS with $O(1)$ update time. In a subsequent algorithm, we establish the trade-off between approximation quality $2(1+\frac{1}{k})$ and update time $O(k^2\log n)$ for $k\in \mathbb{N}$. We conclude with an algorithm that maintains a $2$-approximate MAX-IS for dynamic sets of uniform height and arbitrary width rectangles with $O(\omega \log n)$ update time, where $\omega$ is the largest number of maximal cliques stabbed by any axis-parallel line. We have implemented our algorithms and report the results of an experimental comparison exploring the trade-off between solution size and update time for synthetic and real-world map labeling data sets.) <|cite_end|> <|cite_start|> (Reference: Nearly-Tight Lower Bounds for Set Cover and Network Design with Deadlines/Delay: In network design problems with deadlines/delay, an algorithm must make transmissions over time to satisfy connectivity requests on a graph. To satisfy a request, a transmission must be made that provides the desired connectivity. In the deadline case, this transmission must occur inside a time window associated with the request. In the delay case, the transmission should be as soon as possible after the request’s release, to avoid delay cost. In FOCS 2020, frameworks were given which reduce a network design problem with dead-lines/delay to its classic, offline variant, while incurring an additional competitiveness loss factor of O (log n ), where n is the number of vertices in the graph. Trying to improve upon this loss factor is thus a natural research direction. The frameworks of FOCS 2020 also apply to set cover with deadlines/delay , in which requests arrive on the elements of a universe over time, and the algorithm must transmit sets to serve them. In this problem, a universe of sets and elements is given, requests arrive on elements over time, and the algorithm must transmit sets to serve them. In this paper, we give nearly tight lower bounds for set cover with deadlines/delay. These lower bounds imply nearly-tight lower bounds of Ω(log n/ log log n ) for a few network design problems, such as node-weighted Steiner forest and directed Steiner tree. Our results imply that the frameworks in FOCS 2020 are essentially optimal, and improve quadratically over the best previously-known lower bounds.) <|cite_end|> <|cite_start|> (Reference: The Power of Clairvoyance for Multi-Level Aggregation and Set Cover with Delay: ) <|cite_end|>and many others <|cite_start|> (Reference: Online k-Way Matching with Delays and the H-Metric: In this paper, we study $k$-Way Min-cost Perfect Matching with Delays - the $k$-MPMD problem. This problem considers a metric space with $n$ nodes. Requests arrive at these nodes in an online fashion. The task is to match these requests into sets of exactly $k$, such that the space and time cost of all matched requests are minimized. The notion of the space cost requires a definition of an underlying metric space that gives distances of subsets of $k$ elements. 
For $k>2$, the task of finding a suitable metric space is at the core of our problem: We show that for some known generalizations to $k=3$ points, such as the $2$-metric and the $D$-metric, there exists no competitive randomized algorithm for the $3$-MPMD problem. The $G$-metrics are defined for 3 points and allows for a competitive algorithm for the $3$-MPMD problem. For $k>3$ points, there exist two generalizations of the $G$-metrics known as $n$- and $K$-metrics. We show that neither the $n$-metrics nor the $K$-metrics can be used for the $k$-MPMD problem. On the positive side, we introduce the $H$-metrics, the first metrics to allow for a solution of the $k$-MPMD problem for all $k$. In order to devise an online algorithm for the $k$-MPMD problem on the $H$-metrics, we embed the $H$-metric into trees with an $O(\log n)$ distortion. Based on this embedding result, we extend the algorithm proposed by Azar et al. (2017) and achieve a competitive ratio of $O(\log n)$ for the $k$-MPMD problem.) <|cite_end|> <|cite_start|> (Reference: Caching with Time Windows: We consider the (weighted) Paging with Time Windows problem, which is identical to the classical weighted paging problem but where each page request only needs to be served by a given deadline. This problem arises in many practical applications of online caching, such as the deadline I/O scheduler in the Linux kernel and video-on-demand streaming. From a theoretical perspective, this generalizes the caching problem to allow delayed service, a line of work that has recently gained traction in online algorithms (e.g., Emek et al. STOC '16, Azar et al. STOC '17, Azar and Touitou FOCS '19, etc.). Our main result is an O(log k log n)-competitive algorithm for the Paging with Time Windows problem on n pages with a cache of size k. This significantly improves on the previous best bound of O(k) (Azar et al. (STOC '17). We also consider the offline version of this problem, for which we give an O(1) approximation algorithm and prove APX-hardness. These are the first results for the offline problem; even NP-hardness was not known before our work. At the heart of our algorithms is a novel hitting-set LP relaxation of this problem that overcomes the Omega(k) integrality gap of the natural LP for the problem. To the best of our knowledge, this is the first example of an LP-based algorithm for an online algorithm with delays/deadlines.) <|cite_end|> <|cite_start|> (Reference: Beyond tree embeddings--a deterministic framework for network design with deadlines or delay: We consider network design problems with deadline or delay. All previous results for these models are based on randomized embedding of the graph into a tree (HST) and then solving the problem on this tree. We show that this is not necessary. In particular, we design a deterministic framework for these problems which is not based on embedding. This enables us to provide deterministic poly-log($n$)-competitive algorithms for Steiner tree, generalized Steiner tree, node weighted Steiner tree, (non-uniform) facility location and directed Steiner tree with deadlines or with delay (where $n$ is the number of nodes). Our deterministic algorithms also give improved guarantees over some previous randomized results. In addition, we show a lower bound of poly $\text{log}(n)$ for some of these problems, which implies that our framework is optimal up to the power of the poly-log. Our algorithms and techniques differ significantly from those in all previous considerations of these problems.) 
<|cite_end|>
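To give a concrete feel for the budget-growing rule sketched in the facility-location abstract quoted above, the following toy simulation is our own illustration: the one-dimensional metric, the linear (unit-rate) waiting costs, the opening cost, the radius, and the threshold rule are all hypothetical simplifications, not the algorithm of the cited papers.
\begin{verbatim}
# Toy sketch of a budget-based rule for online facility location with delay.
# All parameters below are hypothetical; this is an illustration only.
FACILITY_COST = 5.0   # cost of momentarily opening a facility
RADIUS = 1.0          # pending requests within this distance share a facility
DT = 0.1              # time-stepping granularity of the simulation

# Requests on the real line: (arrival_time, position).
events = sorted([(0.0, 0.2), (0.5, 0.4), (1.0, 3.0), (1.2, 0.3), (4.0, 3.1)])
pending, total_cost, t = [], 0.0, 0.0

while events or pending:
    # Admit requests that have arrived by time t.
    while events and events[0][0] <= t:
        pending.append(events.pop(0))
    # Each pending request accumulates a budget equal to its waiting time.
    # Open a facility at a request's position once the budgets of nearby
    # pending requests cover the opening cost, and connect them all.
    for _, center in list(pending):
        near = [(a, p) for (a, p) in pending if abs(p - center) <= RADIUS]
        if near and sum(t - a for a, _ in near) >= FACILITY_COST:
            total_cost += FACILITY_COST                    # opening cost
            total_cost += sum((t - a) + abs(p - center)    # delay + connection
                              for a, p in near)
            pending = [rq for rq in pending if rq not in near]
    t += DT

print(f"toy total cost (opening + delay + connection): {total_cost:.2f}")
\end{verbatim}
The threshold rule mirrors, at a very high level, the idea of growing budgets with waiting delays and opening a facility once their sum reaches a cost threshold; the actual algorithms and competitive analyses in the cited works are substantially more involved.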
[ "<|reference_start|> A Primal-Dual Online Deterministic Algorithm for Matching with Delays: In the Min-cost Perfect Matching with Delays (MPMD) problem, 2 m requests arrive over time at points of a metric space. An online algorithm has to connect these requests in pairs, but a decision to match may be postponed till a more suitable matching pair is found. The goal is to minimize the joint cost of connection and the total waiting time of all requests. We present an O(m)-competitive deterministic algorithm for this problem, improving on an existing bound of O(m^(log(5.5))) = O(m^2.46). Our algorithm also solves (with the same competitive ratio) a bipartite variant of MPMD, where requests are either positive or negative and only requests with different polarities may be matched with each other. Unlike the existing randomized solutions, our approach does not depend on the size of the metric space and does not have to know it in advance. <|reference_end|>", "<|reference_start|> Deterministic Min-Cost Matching with Delays: We consider the online Minimum-Cost Perfect Matching with Delays (MPMD) problem introduced by Emek et al. (STOC 2016), in which a general metric space is given, and requests are submitted in different times in this space by an adversary. The goal is to match requests, while minimizing the sum of distances between matched pairs in addition to the time intervals passed from the moment each request appeared until it is matched. In the online Minimum-Cost Bipartite Perfect Matching with Delays (MBPMD) problem introduced by Ashlagi et al. (APPROX/RANDOM 2017), each request is also associated with one of two classes, and requests can only be matched with requests of the other class. Previous algorithms for the problems mentioned above, include randomized $O\\left(\\log n\\right)$-competitive algorithms for known and finite metric spaces, $n$ being the size of the metric space, and a deterministic $O\\left(m\\right)$-competitive algorithm, $m$ being the number of requests. We introduce $O\\left(m^{\\log\\left(\\frac{3}{2}+\\epsilon\\right)}\\right)$-competitive deterministic algorithms for both problems and for any fixed $\\epsilon > 0$. In particular, for a small enough $\\epsilon$ the competitive ratio becomes $O\\left(m^{0.59}\\right)$. These are the first deterministic algorithms for the mentioned online matching problems, achieving a sub-linear competitive ratio. Our algorithms do not need to know the metric space in advance. <|reference_end|>", "<|reference_start|> The Price of Clustering in Bin-Packing with Applications to Bin-Packingwith Delays: One of the most significant algorithmic challenges in the \"big data era\" is handling instances that are too large to be processed by a single machine. The common practice in this regard is to partition the massive problem instance into smaller ones and process each one of them separately. In some cases, the solutions for the smaller instances are later on assembled into a solution for the whole instance, but in many cases this last stage cannot be pursued (e.g., because it is too costly, because of locality issues, or due to privacy considerations). Motivated by this phenomenon, we consider the following natural combinatorial question: Given a bin-packing instance (namely, a set of items with sizes in (0, 1] that should be packed into unit capacity bins) I and a partition Ii \\ i of I into clusters, how large is the ratio ∑i Øpt(Ii) / Øpt(I), where Øpt(J) denotes the optimal number of bins into which the items in J can be packed? 
In this paper, we investigate the supremum of this ratio over all instances I and partitions Ii \\ i, referred to as the bin-packing price of clustering (¶oC ). It is trivial to observe that if each cluster contains only one tiny item (and hence, Øpt(Ii) = 1), then the ¶oC is unbounded. On the other hand, a relatively straightforward argument shows that under the constraint that Øpt(Ii) ≥ 2, the ¶oC is 2. Our main challenge was to determine whether the ¶oC drops below 2 when Øpt(Ii) > 2. In addition, one may hope that łimk -> ∞ ¶oC(k) = 1, where ¶oC(k) denotes the ¶oC under the restriction to clusters Ii with Øpt(Ii) ≥ k. We resolve the former question affirmatively and the latter one negatively: Our main results are that ¶oC(k) łeq 1.951 for any k ≥ 3 and łimk -> ∞ ¶oC(k) = 1.691... Moreover, the former bound cannot be significantly improved as ¶oC(3) > 1.933. In addition to the immediate contribution of this combinatorial result to \"big data\" kind of applications, it turns out that it is useful also for an interesting online problem called bin-packing with delays. <|reference_end|>", "<|reference_start|> Online k-Way Matching with Delays and the H-Metric: In this paper, we study $k$-Way Min-cost Perfect Matching with Delays - the $k$-MPMD problem. This problem considers a metric space with $n$ nodes. Requests arrive at these nodes in an online fashion. The task is to match these requests into sets of exactly $k$, such that the space and time cost of all matched requests are minimized. The notion of the space cost requires a definition of an underlying metric space that gives distances of subsets of $k$ elements. For $k>2$, the task of finding a suitable metric space is at the core of our problem: We show that for some known generalizations to $k=3$ points, such as the $2$-metric and the $D$-metric, there exists no competitive randomized algorithm for the $3$-MPMD problem. The $G$-metrics are defined for 3 points and allows for a competitive algorithm for the $3$-MPMD problem. For $k>3$ points, there exist two generalizations of the $G$-metrics known as $n$- and $K$-metrics. We show that neither the $n$-metrics nor the $K$-metrics can be used for the $k$-MPMD problem. On the positive side, we introduce the $H$-metrics, the first metrics to allow for a solution of the $k$-MPMD problem for all $k$. In order to devise an online algorithm for the $k$-MPMD problem on the $H$-metrics, we embed the $H$-metric into trees with an $O(\\log n)$ distortion. Based on this embedding result, we extend the algorithm proposed by Azar et al. (2017) and achieve a competitive ratio of $O(\\log n)$ for the $k$-MPMD problem. <|reference_end|>" ]
[ 5, 15, 36, 41 ]
{"<|cite_1|>": "arxiv-93689", "<|multi_cite_2_1|>": "arxiv-93689", "<|multi_cite_2_2|>": "ss-818399", "<|multi_cite_2_3|>": "ss-980190", "<|multi_cite_2_4|>": "arxiv-122352", "<|multi_cite_2_5|>": "arxiv-155882", "<|multi_cite_2_6|>": "ss-1526965", "<|multi_cite_2_7|>": "arxiv-161951", "<|multi_cite_2_8|>": "arxiv-301350", "<|cite_3|>": "arxiv-161951", "<|cite_4|>": "arxiv-93689", "<|cite_5|>": "ss-818399", "<|cite_6|>": "ss-980190", "<|cite_7|>": "arxiv-122352", "<|cite_8|>": "arxiv-155882", "<|cite_9|>": "arxiv-161951", "<|cite_10|>": "ss-2465340", "<|cite_11|>": "ss-2357492", "<|cite_12|>": "ss-980190", "<|cite_13|>": "arxiv-93689", "<|cite_14|>": "ss-818399", "<|cite_15|>": "ss-980190", "<|cite_16|>": "ss-1526965", "<|cite_17|>": "arxiv-301350", "<|multi_cite_18_1|>": "arxiv-132273", "<|multi_cite_18_2|>": "ss-1302812", "<|multi_cite_18_3|>": "arxiv-199988", "<|multi_cite_19_1|>": "arxiv-80725", "<|multi_cite_19_2|>": "ss-710858", "<|multi_cite_19_3|>": "ss-2357493", "<|multi_cite_19_4|>": "arxiv-199988", "<|multi_cite_19_5|>": "ss-2552981", "<|multi_cite_19_6|>": "ss-1162596", "<|multi_cite_20_1|>": "arxiv-377561", "<|multi_cite_20_2|>": "arxiv-199988", "<|multi_cite_20_3|>": "ss-1391109", "<|multi_cite_21_1|>": "ss-1302813", "<|multi_cite_21_2|>": "arxiv-219316", "<|multi_cite_22_1|>": "ss-1526964", "<|multi_cite_22_2|>": "ss-2357494", "<|multi_cite_22_3|>": "ss-1162596", "<|multi_cite_23_1|>": "arxiv-366895", "<|multi_cite_23_2|>": "ss-2052936", "<|multi_cite_23_3|>": "ss-1391109", "<|multi_cite_23_4|>": "ss-2357494", "<|multi_cite_23_5|>": "ss-2357495", "<|multi_cite_24_1|>": "arxiv-307031", "<|multi_cite_24_2|>": "ss-2141966", "<|multi_cite_24_3|>": "arxiv-375705", "<|multi_cite_24_4|>": "arxiv-356546", "<|multi_cite_24_5|>": "ss-2357496", "<|cite_25|>": "arxiv-307031", "<|cite_26|>": "ss-2141966", "<|cite_27|>": "arxiv-356546", "<|cite_28|>": "ss-2357496", "<|multi_cite_29_1|>": "ss-1036936", "<|multi_cite_29_2|>": "ss-1272422", "<|multi_cite_30_1|>": "arxiv-423224", "<|multi_cite_30_2|>": "arxiv-387322", "<|multi_cite_30_3|>": "ss-2357497", "<|multi_cite_30_4|>": "ss-2357498", "<|multi_cite_30_5|>": "arxiv-395610", "<|multi_cite_30_6|>": "arxiv-306248", "<|multi_cite_30_7|>": "ss-1173367", "<|multi_cite_30_8|>": "arxiv-213645", "<|multi_cite_30_9|>": "ss-2357499", "<|multi_cite_30_10|>": "ss-2357500", "<|multi_cite_30_11|>": "ss-1445498", "<|multi_cite_30_12|>": "arxiv-155870", "<|multi_cite_30_13|>": "ss-2357501"}
2404.19392-0
<|paper_start|> Title: Convergence analysis of the transformed gradient projection algorithms on compact matrix manifolds Abstract: Convergence analysis of the transformed gradient projection algorithms on compact matrix manifolds: In this paper, to address the optimization problem on a compact matrix manifold, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, using the projection onto this compact matrix manifold. Compared with the existing algorithms, the key innovation in our approach lies in the utilization of a new class of search directions and various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility by encompassing the classical gradient projection algorithms as special cases, and intersecting the retraction-based line-search algorithms. Notably, our focus is on the Stiefel or Grassmann manifold, revealing that many existing algorithms in the literature can be seen as specific instances within our proposed framework, and this algorithmic framework also induces several new special cases. Then, we conduct a thorough exploration of the convergence properties of these algorithms, considering various search directions and stepsizes. To achieve this, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, allowing us to extend classical inequalities related to retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under three distinct stepsizes. In cases where the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform classical gradient projection and retraction-based line-search algorithms in several scenarios. Introduction \subsection{Problem formulation} Let $ \mm \subseteq \RR^{n\times r}$ be a compact matrix submanifold of class \(C^3\) with $1\leq r\leq n$. In this paper, we mainly consider the following optimization problem: \begin{align}\label{eq:objec_func_g} \min_{\matr{X}\in\mm} f(\matr{X}), \end{align} where the cost function $f$ is assumed to be twice continuously differentiable over $\RR^{n \times r}$. Problem \eqref{eq:objec_func_g} has a wide range of applications in various fields, including \emph{signal processing} <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. 
The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|> <|cite_start|> (Reference: Handbook of Blind Source Separation: Independent Component Analysis and Applications: Edited by the people who were forerunners in creating the field, together with contributions from 34 leading international experts, this handbook provides the definitive reference on Blind Source Separation, giving a broad and comprehensive description of all the core principles and methods, numerical algorithms and major applications in the fields of telecommunications, biomedical engineering and audio, acoustic and speech processing. Going beyond a machine learning perspective, the book reflects recent results in signal processing and numerical analysis, and includes topics such as optimization criteria, mathematical tools, the design of numerical algorithms, convolutive mixtures, and time frequency approaches. This Handbook is an ideal reference for university researchers, RD algebraic identification of under-determined mixtures, time-frequency methods, Bayesian approaches, blind identification under non negativity approaches, semi-blind methods for communicationsShows the applications of the methods to key application areas such as telecommunications, biomedical engineering, speech, acoustic, audio and music processing, while also giving a general method for developing applications) <|cite_end|> <|cite_start|> (Reference: Polar Decomposition-based Algorithms on the Product of Stiefel Manifolds with Applications in Tensor Approximation: ) <|cite_end|>, \emph{machine learning} <|cite_start|> (Reference: {Generalized Power Method for Sparse Principal Component Analysis: In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has best convergence properties in the case when either the objective function or the feasible set are strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.) 
<|cite_end|> <|cite_start|> (Reference: Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?: This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways? We develop novel orthogonality regularizations on training deep CNNs, utilizing various advanced analytical tools such as mutual coherence and restricted isometry property. These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. We then benchmark their effects on state-of-the-art models: ResNet, WideResNet, and ResNeXt, on several most popular computer vision datasets: CIFAR-10, CIFAR-100, SVHN and ImageNet. We observe consistent performance gains after applying those proposed regularizations, in terms of both the final accuracies achieved, and faster and more stable convergences. We have made our codes and pre-trained models publicly available: this https URL.) <|cite_end|> <|cite_start|> (Reference: Orthogonal Convolutional Neural Networks: Deep convolutional neural networks are hindered by training instability and feature redundancy towards further performance improvement. A promising solution is to impose orthogonality on convolutional filters. We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel instead of using the common kernel orthogonality approach, which we show is only necessary but not sufficient for ensuring orthogonal convolutions. Our proposed orthogonal convolution requires no additional parameters and little computational overhead. This method consistently outperforms the kernel orthogonality alternative on a wide range of tasks such as image classification and inpainting under supervised, semi-supervised and unsupervised settings. Further, it learns more diverse and expressive features with better training stability, robustness, and generalization. Our code is publicly available at https://github.com/samaonline/Orthogonal-Convolutional-Neural-Networks.) <|cite_end|>, \emph{numerical linear algebra} <|cite_start|> (Reference: Tensor Analysis: Spectral Theory and Special Tensors: ) <|cite_end|> <|cite_start|> (Reference: Z-eigenvalue methods for a global polynomial optimization problem: ) <|cite_end|>and \emph{data analysis} <|cite_start|> (Reference: Tensor decompositions for learning latent variable models: This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). 
A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.) <|cite_end|> <|cite_start|> (Reference: Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis: The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move toward more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift toward models that are essentially polynomial, the uniqueness of which, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.) <|cite_end|> <|cite_start|> (Reference: {Tensor Decompositions and Applications: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.) <|cite_end|> <|cite_start|> (Reference: Tensor Decomposition for Signal Processing and Machine Learning: Tensors or {\em multi-way arrays} are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row,column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth {\em and depth} that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. 
The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.) <|cite_end|>. In this paper, as two significant examples of the above problem \eqref{eq:objec_func_g}, we mainly focus on the \emph{Stiefel manifold} <|cite_start|> (Reference: The geometry of algorithms with orthogonality constraints: In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.) <|cite_end|> <|cite_start|> (Reference: A feasible method for optimization with orthogonality constraints: ) <|cite_end|>and the \emph{Grassmann manifold} <|cite_start|> (Reference: A Grassmann Manifold Handbook: Basic Geometry and Computational Aspects: The Grassmann manifold of linear subspaces is important for the mathematical modelling of a multitude of applications, ranging from problems in machine learning, computer vision and image processing to low-rank matrix optimization problems, dynamic low-rank decompositions and model reduction. With this mostly expository work, we aim to provide a collection of the essential facts and formulae on the geometry of the Grassmann manifold in a fashion that is fit for tackling the aforementioned problems with matrix-based algorithms. Moreover, we expose the Grassmann geometry both from the approach of representing subspaces with orthogonal projectors and when viewed as a quotient space of the orthogonal group, where subspaces are identified as equivalence classes of (orthogonal) bases. This bridges the associated research tracks and allows for an easy transition between these two approaches. Original contributions include a modified algorithm for computing the Riemannian logarithm map on the Grassmannian that is advantageous numerically but also allows for a more elementary, yet more complete description of the cut locus and the conjugate points. We also derive a formula for parallel transport along geodesics in the orthogonal projector perspective, formulae for the derivative of the exponential map, as well as a formula for Jacobi fields vanishing at one point.) <|cite_end|>. 
In fact, it is worth noting that the proposed algorithms and convergence results of this paper also apply to other compact matrix manifolds, \emph{e.g.}, the \emph{oblique manifold} <|cite_start|> (Reference: The multimode Procrustes problem: ) <|cite_end|> and the \emph{product of Stiefel manifolds} <|cite_start|> (Reference: Adaptive quadratically regularized newton method for riemannian optimization: Optimization on Riemannian manifolds widely arises in eigenvalue computation, density functional theory, Bose--Einstein condensates, low rank nearest correlation, image registration, signal process...) <|cite_end|> <|cite_start|> (Reference: Polar Decomposition-based Algorithms on the Product of Stiefel Manifolds with Applications in Tensor Approximation: ) <|cite_end|>, although we do not pursue the details in this paper. The \emph{Stiefel manifold} is defined as $\St(r,n) \eqdef \{\matr{X}\in\RR^{n\times r}: \matr{X}^{\T}\matr{X}=\matr{I}_r\}$ <|cite_start|> (Reference: Richtungsfelder und Fernparallelismus in n-dimensionalen Mannigfaltigkeiten: ) <|cite_end|>. If $r=1$, it is the unit sphere $\mathbb{S}^{n-1} \subseteq \RR^{n}$, and when $r=n$, it becomes the $n$-dimensional orthogonal group $\ON{n}\subseteq\RR^{n\times n}$. The \emph{Grassmann manifold} is defined as $\Gr(p,n) \eqdef \{\matr{X}\in\RR^{n\times n}: \matr{X}^{\T}=\matr{X}, \matr{X}^2=\matr{X}, \rank{\matr{X}}=p\}$ <|cite_start|> (Reference: Geometric mean and geodesic regression on Grassmannians: ) <|cite_end|>, which is the set of rank-$p$ orthogonal projection matrices. It is also isomorphic\footnote{In this paper, for the sake of convenience in presentation, we will interchangeably use these two equivalent forms.} to $\St(p,n)/\ON{p}$, the quotient of $\St(p,n)$ by the right action of $\ON{p}$ <|cite_start|> (Reference: A Grassmann Manifold Handbook: Basic Geometry and Computational Aspects: The Grassmann manifold of linear subspaces is important for the mathematical modelling of a multitude of applications, ranging from problems in machine learning, computer vision and image processing to low-rank matrix optimization problems, dynamic low-rank decompositions and model reduction. With this mostly expository work, we aim to provide a collection of the essential facts and formulae on the geometry of the Grassmann manifold in a fashion that is fit for tackling the aforementioned problems with matrix-based algorithms. Moreover, we expose the Grassmann geometry both from the approach of representing subspaces with orthogonal projectors and when viewed as a quotient space of the orthogonal group, where subspaces are identified as equivalence classes of (orthogonal) bases. This bridges the associated research tracks and allows for an easy transition between these two approaches. Original contributions include a modified algorithm for computing the Riemannian logarithm map on the Grassmannian that is advantageous numerically but also allows for a more elementary, yet more complete description of the cut locus and the conjugate points. We also derive a formula for parallel transport along geodesics in the orthogonal projector perspective, formulae for the derivative of the exponential map, as well as a formula for Jacobi fields vanishing at one point.) <|cite_end|>.
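To make the two running examples concrete, the following self-contained NumPy sketch (our own illustration; the dimensions and the random seed are arbitrary choices) constructs a point on $\St(r,n)$ by orthonormalizing a random matrix, verifies the defining constraint $\matr{X}^{\T}\matr{X}=\matr{I}_r$, forms the associated orthogonal projector $\matr{P}=\matr{X}\matr{X}^{\T}$ on the Grassmann manifold, and checks the quotient identification $\Gr(r,n)\cong\St(r,n)/\ON{r}$ numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3

# A point on St(r, n): orthonormalize a random n-by-r matrix via thin QR.
A = rng.standard_normal((n, r))
X, _ = np.linalg.qr(A)

# Defining constraint of the Stiefel manifold: X^T X = I_r.
print(np.linalg.norm(X.T @ X - np.eye(r)))      # numerically zero

# Associated Grassmann point: the orthogonal projector P = X X^T,
# which is symmetric, idempotent, and of rank r.
P = X @ X.T
print(np.linalg.norm(P - P.T))                  # symmetry
print(np.linalg.norm(P @ P - P))                # idempotency
print(np.linalg.matrix_rank(P))                 # rank r

# Quotient identification Gr(r, n) = St(r, n) / O(r): right-multiplying X
# by any orthogonal Q yields a different Stiefel point but the same projector.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
print(np.linalg.norm((X @ Q) @ (X @ Q).T - P))  # numerically zero
\end{verbatim}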
A diverse range of algorithms has been developed to address problem \eqref{eq:objec_func_g} in the literature, including both \emph{infeasible} and \emph{feasible} approaches. Infeasible methods encompass techniques such as splitting methods <|cite_start|> (Reference: A Splitting Method for Orthogonality Constrained Problems: ) <|cite_end|> and penalty methods <|cite_start|> (Reference: A class of smooth exact penalty function methods for optimization problems with orthogonality constraints: Updating the augmented Lagrangian multiplier by closed-form expression yields efficient first-order infeasible approach for optimization problems with orthogonality constraints. Hence, parallelization becomes tractable in solving this type of problems. Inspired by this closed-form updating scheme, we propose a novel penalty function with compact convex constraints (PenC). We show that PenC can act as an exact penalty model which shares the same global minimizers as the original problem with orthogonality constraints. Based on PenC, we first propose a first-order algorithm called PenCF and establish its global convergence and local linear convergence rate under some mild assumptions. For the case that the computation and storage of Hessian is achievable, and we pursue high precision solution and fast local convergence rate, a second-order approach called PenCS is proposed for solving PenC. To avoid expensive calculation or solving a hard subproblem in computing the Newton step, we propose a new strategy to do it approximately which still leads to quadratic convergence locally. Moreover, the main iterations of both PenCF and PenCS are orthonormalization-free and hence parallelizable. Numerical experiments illustrate that PenCF is comparable with the existing first-order methods. Furthermore, PenCS shows its stability and high efficiency in obtaining high precision solution comparing with the existing second-order methods.) <|cite_end|> <|cite_start|> (Reference: Trace-Penalty Minimization for Large-Scale Eigenspace Computation: ) <|cite_end|>. Feasible methods mainly fall into two classes. The first class stems from exploiting the geometric structure of $\mm$, allowing for the direct implementation of various Riemannian optimization algorithms by making use of differential-geometric tools such as \emph{geodesics} and \emph{retractions}; see, e.g., <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|> <|cite_start|> (Reference: An Introduction to Optimization on Smooth Manifolds: Optimization on Riemannian manifolds-the result of smooth geometry and optimization merging into one elegant modern framework-spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book.) <|cite_end|> <|cite_start|> (Reference: A Brief Introduction to Manifold Optimization: ) <|cite_end|>. The second class\footnote{In this paper, our emphasis will be on the second class of feasible methods.} of feasible methods ensures that iterations consistently remain within the manifold by using the \emph{projection} onto compact matrix manifolds. \subsection{Retraction-based line-search algorithms}\label{subsec:retrac_Riema_optim} A fundamental concept in the theory of differentiable manifolds is the \emph{geodesic} <|cite_start|> (Reference: {Linear and Nonlinear Programming: Linear programs (LPs) and nonlinear programs (NLPs) are mathematical problems in which data are used to find the values of variables that minimize or maximize an objective function while simultaneously satisfying several imposed constraints on the values of the variables. Such problems arise frequently in computer science, mathematics, business, economics, statistics, engineering, operations research, and the sciences. An overview of the similarities and differences between LPs and NLPs is presented here. For example, although the art of building LP and NLP models involves identifying the variables, objective function, and constraints, methods for solving such problems differ greatly. A finite procedure has been developed to solve all LPs; however, no such procedure is available for solving NLPs. Keywords: linear programming; nonlinear programming; optimization; linear optimization; nonlinear optimization; mathematical models; operations research) <|cite_end|>, which, in essence, embodies a locally shortest path on the manifold, offering a generalization of straight lines in Euclidean space <|cite_start|> (Reference: Minimizing a differentiable function over a differential manifold: ) <|cite_end|>.
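For instance, on the unit sphere $\mathbb{S}^{n-1}$ (the case $r=1$ of the Stiefel manifold), the geodesic through $\vect{x}$ with initial velocity $\vect{v}$ in the tangent space at $\vect{x}$ admits the standard closed form $\gamma(t)=\cos(t\|\vect{v}\|)\,\vect{x}+\sin(t\|\vect{v}\|)\,\vect{v}/\|\vect{v}\|$. The short sketch below, our own illustration of this standard formula rather than code from the cited references, checks numerically that the curve stays on the sphere, with $\gamma(0)=\vect{x}$ and $\gamma'(0)=\vect{v}$.
\begin{verbatim}
import numpy as np

def sphere_geodesic(x, v, t):
    """Geodesic on the unit sphere through x, initial velocity v (v orthogonal to x)."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return x.copy()
    return np.cos(t * nv) * x + np.sin(t * nv) * (v / nv)

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
x /= np.linalg.norm(x)            # a point on the sphere S^4
v = rng.standard_normal(5)
v -= (v @ x) * x                  # project onto the tangent space: x^T v = 0

for t in (0.0, 0.3, 1.7):
    print(np.linalg.norm(sphere_geodesic(x, v, t)))  # always 1: stays on the sphere

# First-order check: the finite difference (gamma(h) - gamma(0)) / h is close to v.
h = 1e-6
print(np.linalg.norm((sphere_geodesic(x, v, h) - x) / h - v))  # O(h), i.e. tiny
\end{verbatim}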
Meanwhile, in classical unconstrained optimization, a thoroughly examined class of line-search algorithms searches along straight lines in each iteration <|cite_start|> (Reference: {Linear and Nonlinear Programming: Linear programs (LPs) and nonlinear programs (NLPs) are mathematical problems in which data are used to find the values of variables that minimize or maximize an objective function while simultaneously satisfying several imposed constraints on the values of the variables. Such problems arise frequently in computer science, mathematics, business, economics, statistics, engineering, operations research, and the sciences. An overview of the similarities and differences between LPs and NLPs is presented here. For example, although the art of building LP and NLP models involves identifying the variables, objective function, and constraints, methods for solving such problems differ greatly. A finite procedure has been developed to solve all LPs; however, no such procedure is available for solving NLPs. Keywords: linear programming; nonlinear programming; optimization; linear optimization; nonlinear optimization; mathematical models; operations research) <|cite_end|>. Therefore, it is natural to extend these classical line-search methods to manifolds by means of geodesics. These types of algorithms have been extensively studied in early works; see <|cite_start|> (Reference: {Linear and Nonlinear Programming: Linear programs (LPs) and nonlinear programs (NLPs) are mathematical problems in which data are used to find the values of variables that minimize or maximize an objective function while simultaneously satisfying several imposed constraints on the values of the variables. Such problems arise frequently in computer science, mathematics, business, economics, statistics, engineering, operations research, and the sciences. An overview of the similarities and differences between LPs and NLPs is presented here. For example, although the art of building LP and NLP models involves identifying the variables, objective function, and constraints, methods for solving such problems differ greatly. A finite procedure has been developed to solve all LPs; however, no such procedure is available for solving NLPs. Keywords: linear programming; nonlinear programming; optimization; linear optimization; nonlinear optimization; mathematical models; operations research) <|cite_end|> <|cite_start|> (Reference: Minimizing a differentiable function over a differential manifold: ) <|cite_end|> <|cite_start|> (Reference: Optimization Techniques on Riemannian Manifolds: The techniques and analysis presented in this paper provide new methods to solve optimization problems posed on Riemannian manifolds. A new point of view is offered for the solution of constrained optimization problems. Some classical optimization techniques on Euclidean space are generalized to Riemannian manifolds. Several algorithms are presented and their convergence properties are analyzed employing the Riemannian structure of the manifold. Specifically, two apparently new algorithms, which can be thought of as Newton's method and the conjugate gradient method on Riemannian manifolds, are presented and shown to possess, respectively, quadratic and superlinear convergence. Examples of each method on certain Riemannian manifolds are given with the results of numerical experiments. Rayleigh's quotient defined on the sphere is one example.
It is shown that Newton's method applied to this function converges cubically, and that the Rayleigh quotient iteration is an efficient approximation of Newton's method. The Riemannian version of the conjugate gradient method applied to this function gives a new algorithm for finding the eigenvectors corresponding to the extreme eigenvalues of a symmetric matrix. Another example arises from extremizing the function $\mathop{\rm tr} {\Theta}^{\scriptscriptstyle\rm T}Q{\Theta}N$ on the special orthogonal group. In a similar example, it is shown that Newton's method applied to the sum of the squares of the off-diagonal entries of a symmetric matrix converges cubically.) <|cite_end|> <|cite_start|> (Reference: Globally Convergent Optimization Algorithms on Riemannian Manifolds: Uniform Framework for Unconstrained and Constrained Optimization: ) <|cite_end|> and the references therein. It is worth noting that the algorithms proposed therein usually presume the explicit calculation of geodesics along a given direction. While closed-form expressions of geodesics are available only for certain manifolds <|cite_start|> (Reference: The geometry of algorithms with orthogonality constraints: In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.) <|cite_end|> <|cite_start|> (Reference: Geometric Optimization Methods for Adaptive Filtering: The techniques and analysis presented in this thesis provide new methods to solve optimization problems posed on Riemannian manifolds. These methods are applied to the subspace tracking problem found in adaptive signal processing and adaptive control. A new point of view is offered for the constrained optimization problem. Some classical optimization techniques on Euclidean space are generalized to Riemannian manifolds. Several algorithms are presented and their convergence properties are analyzed employing the Riemannian structure of the manifold. Specifically, two new algorithms, which can be thought of as Newton's method and the conjugate gradient method on Riemannian manifolds, are presented and shown to possess quadratic and superlinear convergence, respectively. These methods are applied to several eigenvalue and singular value problems, which are posed as constrained optimization problems. ...) <|cite_end|>, the computation of geodesics can be expensive or even impractical in general, as shown in <|cite_start|> (Reference: Constrained optimization along geodesics: ) <|cite_end|> <|cite_start|> (Reference: Projection-like Retractions on Matrix Manifolds: This paper deals with constructing retractions, a key step when applying optimization algorithms on matrix manifolds.
For submanifolds of Euclidean spaces, we show that the operation consisting of taking a tangent step in the embedding Euclidean space followed by a projection onto the submanifold is a retraction. We also show that the operation remains a retraction if the projection is generalized to a projection-like procedure that consists of coming back to the submanifold along “admissible” directions, and we give a sufficient condition on the admissible directions for the generated retraction to be second order. This theory offers a framework in which previously proposed retractions can be analyzed, as well as a toolbox for constructing new ones. Illustrations are given for projection-like procedures on some specific manifolds for which we have an explicit, easy-to-compute expression.) <|cite_end|>. To address this challenge, it has been suggested to approximate exact geodesics using computationally efficient alternatives <|cite_start|> (Reference: Constrained optimization along geodesics: ) <|cite_end|> <|cite_start|> (Reference: Optimization on the symplectic Stiefel manifold: SR decomposition-based retraction and applications: Numerous problems in optics, quantum physics, stability analysis, and control of dynamical systems can be brought to an optimization problem with matrix variable subjected to the symplecticity constraint. As this constraint nicely forms a so-called symplectic Stiefel manifold, Riemannian optimization is preferred, because one can borrow ideas from unconstrained optimization methods after preparing necessary geometric tools. Retraction is arguably the most important one which decides the way iterates are updated given a search direction. Two retractions have been constructed so far: one relies on the Cayley transform and the other is designed using quasi-geodesic curves. In this paper, we propose a new retraction which is based on an SR matrix decomposition. We prove that its domain contains the open unit ball which is essential in proving the global convergence of the associated gradient-based optimization algorithm. Moreover, we consider three applications--symplectic target matrix problem, symplectic eigenvalue computation, and symplectic model reduction of Hamiltonian systems--with various examples. The extensive numerical comparisons reveal the strengths of the proposed optimization algorithm.) <|cite_end|>. For example, in the context of the Stiefel manifold, various specially designed curves along search directions have been constructed with low computational cost, and curvilinear search algorithms have subsequently been developed based on these curves <|cite_start|> (Reference: A feasible method for optimization with orthogonality constraints: ) <|cite_end|> <|cite_start|> (Reference: Adaptive regularized self-consistent field iteration with exact hessian for electronic structure calculation: The self-consistent field (SCF) iteration has been used ubiquitously for solving the Kohn--Sham (KS) equation or the minimization of the KS total energy functional with respect to orthogonality constraints in electronic structure calculations. Although SCF with heuristics such as charge mixing often works remarkably well on many problems, it is well known that its convergence can be unpredictable and there is no general theoretical analysis on its performance. We regularize the SCF iteration and establish rigorous global convergence to the first-order optimality conditions. The Hessian of the total energy functional is further exploited. 
By adding the part of the Hessian which is not considered in SCF, our methods can always achieve a highly accurate solution on problems for which SCF fails and exhibit a better convergence rate than SCF in the KSSOLV toolbox under the MATLAB environment.) <|cite_end|> <|cite_start|> (Reference: A framework of constraint preserving update schemes for optimization on Stiefel manifold: ) <|cite_end|>. Note that geodesics can often be computed through an exponential map <|cite_start|> (Reference: The geometry of algorithms with orthogonality constraints: In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.) <|cite_end|>. To approximate these geodesics effectively, it suffices to find an approximation of the exponential map, which gives rise to the concept of \emph{retraction} <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|>. Let $\mm'\subseteq\RR^{m}$ be a submanifold. 
A smooth map $ \retr $ from the tangent bundle $ \TangBundle{\mm'} $ to $ \mm' $ is said to be a \emph{retraction} on $ \mm' $ if it satisfies the following properties: \begin{itemize} \item[(i)] $ \retr(\vect{x}, \vect{0}_{\vect{x}}) = \vect{x} $ for all $ \vect{x} \in \mm' $, where $\vect{0}_{\vect{x}} $ denotes the zero element in the tangent space $ \TangMM{\vect{x}} $; \item[(ii)] The differential of $ \retr_{\vect{x}} $ at $ \vect{0}_{\vect{x}} $ is the identity map on $ \TangMM{\vect{x}} $. Here $ \retr_{\vect{x}}: \TangMM{\vect{x}} \to \mm' $ denotes the restriction of $ \retr $ to $ \TangMM{\vect{x}} $, \emph{i.e.}, $ \retr_{\vect{x}}(\cdot) \eqdef \retr(\vect{x}, \cdot) $. \end{itemize} Given $ \vect{x} \in \mm' $ and $ \vect{v} \in \TangMM{\vect{x}} $, define a curve $ c(t) $ on $ \mm' $ passing through $ \vect{x} $ by $ c(t) \eqdef \retr_{\vect{x}}(t\vect{v}) $. The above definition of retraction implies that $ c'(0) = \vect{v} $, and thus this curve $ c(t) $ serves as a first-order approximation of the geodesic passing through $ \vect{x} $ along the direction $ \vect{v} $ <|cite_start|> (Reference: Projection-like Retractions on Matrix Manifolds: This paper deals with constructing retractions, a key step when applying optimization algorithms on matrix manifolds. For submanifolds of Euclidean spaces, we show that the operation consisting of taking a tangent step in the embedding Euclidean space followed by a projection onto the submanifold is a retraction. We also show that the operation remains a retraction if the projection is generalized to a projection-like procedure that consists of coming back to the submanifold along “admissible” directions, and we give a sufficient condition on the admissible directions for the generated retraction to be second order. This theory offers a framework in which previously proposed retractions can be analyzed, as well as a toolbox for constructing new ones. Illustrations are given for projection-like procedures on some specific manifolds for which we have an explicit, easy-to-compute expression.) <|cite_end|>. Over recent decades, numerous retractions have been developed for commonly used manifolds, many of which can be computed efficiently or have closed-form solutions; see <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms.
The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|> <|cite_start|> (Reference: An Introduction to Optimization on Smooth Manifolds: Optimization on Riemannian manifolds-the result of smooth geometry and optimization merging into one elegant modern framework-spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book.) <|cite_end|> <|cite_start|> (Reference: A Brief Introduction to Manifold Optimization: ) <|cite_end|>. The derivation of retractions enables classical algorithms from unconstrained optimization to be adapted to general Riemannian manifolds. Up to now, various retraction-based Riemannian optimization algorithms have been developed, including \emph{Riemannian gradient descent} <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.)
<|cite_end|> <|cite_start|> (Reference: Erratum to: ``Global rates of convergence for nonconvex optimization on manifolds'': We consider the minimization of a cost function $f$ on a manifold $M$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $M$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.) <|cite_end|> <|cite_start|> (Reference: Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods: ) <|cite_end|> <|cite_start|> (Reference: A Riemannian gradient ascent algorithm with applications to orthogonal approximation problems of symmetric tensors: ) <|cite_end|>, \emph{Newton-type} <|cite_start|> (Reference: Adaptive quadratically regularized newton method for riemannian optimization: Optimization on Riemannian manifolds widely arises in eigenvalue computation, density functional theory, Bose--Einstein condensates, low rank nearest correlation, image registration, signal process...) <|cite_end|> <|cite_start|> (Reference: A broyden class of quasi-Newton methods for Riemannian optimization: This paper develops and analyzes a generalization of the Broyden class of quasi-Newton methods to the problem of minimizing a smooth objective function $f$ on a Riemannian manifold. A condition on vector transport and retraction that guarantees convergence and facilitates efficient computation is derived. Experimental evidence is presented demonstrating the value of the extension to the Riemannian Broyden class through superior performance for some problems compared to existing Riemannian BFGS methods, in particular those that depend on differentiated retraction.) <|cite_end|> <|cite_start|> (Reference: A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems: In this paper, a Riemannian BFGS method for minimizing a smooth function on a Riemannian manifold is defined, based on a Riemannian generalization of a cautious update and a weak line search condition. It is proven that the Riemannian BFGS method converges (i) globally to stationary points without assuming the objective function to be convex and (ii) superlinearly to a nondegenerate minimizer. Using the weak line search condition removes the need for information from differentiated retraction. The joint matrix diagonalization problem is chosen to demonstrate the performance of the algorithms with various parameters, line search conditions, and pairs of retraction and vector transport. 
A preliminary version can be found in [Numerical Mathematics and Advanced Applications: ENUMATH 2015, Lect. Notes Comput. Sci. Eng. 112, Springer, New York, 2016, pp. 627--634].) <|cite_end|> <|cite_start|> (Reference: A cubic regularized newton's method over riemannian manifolds: In this paper we present a cubic regularized Newton's method to minimize a smooth function over a Riemannian manifold. The proposed algorithm is shown to reach a second-order $\epsilon$-stationary point within $\mathcal{O}(1/\epsilon^{\frac{3}{2}})$ iterations, under the condition that the pullbacks are locally Lipschitz continuous, a condition that is shown to be satisfied if the manifold is compact. Furthermore, we present a local superlinear convergence result under some additional conditions.) <|cite_end|> <|cite_start|> (Reference: Riemannian Stochastic Variance-Reduced Cubic Regularized Newton Method for Submanifold Optimization: ) <|cite_end|>and \emph{trust region} <|cite_start|> (Reference: Trust-Region Methods on Riemannian Manifolds: ) <|cite_end|> <|cite_start|> (Reference: Erratum to: ``Global rates of convergence for nonconvex optimization on manifolds'': We consider the minimization of a cost function $f$ on a manifold $M$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $M$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.) <|cite_end|>. In particular, in the context of addressing problem \eqref{eq:objec_func_g}, the update scheme of \emph{retraction-based line-search} algorithms can be represented as: \begin{equation}\label{eq:iter_retr} \matr{X}_{k+1} = \retr_{\matr{X}_k} (\tau_k\matr{V}_k). \end{equation} Here, $ \matr{V}_k \in \TangM{\matr{X}_k} $ is a \emph{search direction} such as $ -\grad f(\matr{X}_k) $, where $\grad f(\matr{X}_k) $ is the \emph{Riemannian gradient} <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. 
In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|>of $f$ at $\matr{X}_k$, \(\tau_k>0\) is the \emph{stepsize} selected by certain rules, and $ \retr $ is a retraction on $\mm$. For the Stiefel manifold $\St(r,n)$, the retraction can be chosen as the exponential map, or constructed from the QR decomposition, the polar decomposition, or the Cayley transform; see <|cite_start|> (Reference: A Brief Introduction to Manifold Optimization: ) <|cite_end|>and the references therein. For the Grassmann manifold $\Gr(p,n)$ in the form of the quotient manifold \( \St(p, n) / \ON{p} \), each retraction on the Stiefel manifold induces a corresponding retraction on $\Gr(p,n)$ \cite[Prop. 4.1.3]{absil2009optimization}. When the Grassmann manifold is represented as the set of projection matrices satisfying $\rank{\matr{X}}=p$, available retractions include one based on the QR decomposition <|cite_start|> (Reference: Optimization algorithms on the Grassmann manifold with application to matrix eigenvalue problems: ) <|cite_end|>and the exponential map <|cite_start|> (Reference: A Grassmann Manifold Handbook: Basic Geometry and Computational Aspects: The Grassmann manifold of linear subspaces is important for the mathematical modelling of a multitude of applications, ranging from problems in machine learning, computer vision and image processing to low-rank matrix optimization problems, dynamic low-rank decompositions and model reduction. With this mostly expository work, we aim to provide a collection of the essential facts and formulae on the geometry of the Grassmann manifold in a fashion that is fit for tackling the aforementioned problems with matrix-based algorithms. Moreover, we expose the Grassmann geometry both from the approach of representing subspaces with orthogonal projectors and when viewed as a quotient space of the orthogonal group, where subspaces are identified as equivalence classes of (orthogonal) bases. This bridges the associated research tracks and allows for an easy transition between these two approaches. Original contributions include a modified algorithm for computing the Riemannian logarithm map on the Grassmannian that is advantageous numerically but also allows for a more elementary, yet more complete description of the cut locus and the conjugate points. We also derive a formula for parallel transport along geodesics in the orthogonal projector perspective, formulae for the derivative of the exponential map, as well as a formula for Jacobi fields vanishing at one point.) <|cite_end|>.
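To make these constructions concrete, the following minimal sketch (our own illustration, not code from the cited references; the function names, the fixed stepsize, and the toy objective $f(\matr{X}) = -\tfrac{1}{2}\mathop{\rm tr}(\matr{X}^{\scriptscriptstyle\rm T} A\matr{X})$ are our choices) implements the QR and polar retractions on $\St(r,n)$, checks the two defining properties of a retraction numerically, and runs the line-search update \eqref{eq:iter_retr} with $\matr{V}_k = -\grad f(\matr{X}_k)$.
\begin{verbatim}
# Illustrative sketch (not from the cited works): retractions on the
# Stiefel manifold St(r, n) = {X in R^{n x r} : X^T X = I_r} and the
# retraction-based line-search update X_{k+1} = R_{X_k}(tau_k V_k).
import numpy as np

def retr_qr(X, V):
    # QR retraction: Q-factor of X + V, with signs fixed so diag(R) > 0
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def retr_polar(X, V):
    # polar retraction: R_X(V) = (X + V)(I + V^T V)^{-1/2}, computed as
    # the polar factor of X + V via a thin SVD
    U, _, Wt = np.linalg.svd(X + V, full_matrices=False)
    return U @ Wt

def tangent_proj(X, A):
    # orthogonal projection of A onto T_X St(r, n): A - X sym(X^T A)
    return A - X @ (X.T @ A + A.T @ X) / 2

rng = np.random.default_rng(0)
n, r = 10, 3
X0, _ = np.linalg.qr(rng.standard_normal((n, r)))      # a point on St(r, n)
V = tangent_proj(X0, rng.standard_normal((n, r)))      # a tangent vector
for retr in (retr_qr, retr_polar):
    assert np.allclose(retr(X0, np.zeros_like(V)), X0)     # property (i)
    t = 1e-6                                               # property (ii):
    print(np.linalg.norm((retr(X0, t * V) - X0) / t - V))  # error is O(t)

# Riemannian gradient descent for f(X) = -0.5 tr(X^T A X); the iterates
# approach an orthonormal basis of a dominant r-dim. eigenspace of A
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
X, tau = X0, 0.1
for _ in range(500):
    G = tangent_proj(X, -A @ X)     # Riemannian gradient: project grad f
    X = retr_polar(X, -tau * G)     # the update scheme (eq:iter_retr)
print(np.linalg.norm(A @ X - X @ (X.T @ A @ X)))  # ~0 at stationary points
\end{verbatim}
Note that the polar retraction used here coincides with the best approximation of $\matr{X}+\matr{V}$ in $\St(r,n)$, a fact we exploit again for the gradient projection scheme below.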
In recent years, there has been a growing interest in the convergence analysis of the retraction-based line-search update scheme \eqref{eq:iter_retr} <|cite_start|> (Reference: Erratum to: ``Global rates of convergence for nonconvex optimization on manifolds'': We consider the minimization of a cost function $f$ on a manifold $M$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $M$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.) <|cite_end|> <|cite_start|> (Reference: Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds: We study optimization of finite sums of geodesically smooth functions on Riemannian manifolds. Although variance reduction techniques for optimizing finite-sums have witnessed tremendous attention in the recent years, existing work is limited to vector space problems. We introduce Riemannian SVRG (RSVRG), a new variance reduced Riemannian optimization method. We analyze RSVRG for both geodesically convex and nonconvex (smooth) functions. Our analysis reveals that RSVRG inherits advantages of the usual SVRG method, but with factors depending on curvature of the manifold that influence its convergence. To our knowledge, RSVRG is the first provably fast stochastic Riemannian method. Moreover, our paper presents the first non-asymptotic complexity analysis (novel even for the batch setting) for nonconvex Riemannian optimization. Our results have several implications; for instance, they offer a Riemannian perspective on variance reduced PCA, which promises a short, transparent convergence analysis.) <|cite_end|> <|cite_start|> (Reference: First-order Methods for Geodesically Convex Optimization: Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is much less developed. In this paper we contribute to the understanding of g-convex optimization by developing iteration complexity analysis for several first-order algorithms on Hadamard manifolds. Specifically, we prove upper bounds for the global complexity of deterministic and stochastic (sub)gradient methods for optimizing smooth and nonsmooth g-convex functions, both with and without strong g-convexity. Our analysis also reveals how the manifold geometry, especially \emph{sectional curvature}, impacts convergence rates. To the best of our knowledge, our work is the first to provide global complexity analysis for first-order algorithms for general g-convex optimization.) <|cite_end|>.
Notably, the \emph{weak convergence}\footnote{Every accumulation point of the iterates is a stationary point, \emph{i.e.}, the Riemannian gradient of the cost function at this point is $\vect{0}$.} of general first-order line-search algorithms on a general manifold has been established in <|cite_start|> (Reference: Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.) <|cite_end|>under a \emph{gradient-related} assumption. The research conducted in <|cite_start|> (Reference: Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods: ) <|cite_end|> <|cite_start|> (Reference: A Riemannian gradient ascent algorithm with applications to orthogonal approximation problems of symmetric tensors: ) <|cite_end|>demonstrates the \emph{global convergence}\footnote{For any starting point, the iterates converge as a whole sequence.} of these algorithms on the Stiefel manifold. It has also been shown in <|cite_start|> (Reference: Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods: ) <|cite_end|>that the sequence generated by these algorithms exhibits linear convergence for quadratic optimization on the Stiefel manifold. The work presented in <|cite_start|> (Reference: Iteration-Complexity of Gradient, Subgradient and Proximal Point Methods on Riemannian Manifolds: ) <|cite_end|> <|cite_start|> (Reference: Erratum to: ``Global rates of convergence for nonconvex optimization on manifolds'': We consider the minimization of a cost function $f$ on a manifold $M$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. 
Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $M$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.) <|cite_end|>derived the \emph{convergence rate} of gradient-descent-type algorithms under specific conditions. In particular, the Riemannian gradient descent method attains a first-order \(\epsilon\)-stationary point within $\mathcal{O}(\epsilon^{-2})$ iterations on a general compact submanifold of Euclidean space. Although the literature here is less extensive than for the Stiefel manifold, the weak convergence of the Riemannian gradient descent method has also been established for the Grassmann manifold represented as the set of projection matrices <|cite_start|> (Reference: Optimization algorithms on the Grassmann manifold with application to matrix eigenvalue problems: ) <|cite_end|>. \subsection{Gradient projection method}\label{subsec:grad_proj_meth} In addition to the retraction-based line-search algorithms discussed in \cref{subsec:retrac_Riema_optim}, another feasible approach to addressing problem \eqref{eq:objec_func_g} is the classical \emph{gradient projection} algorithm, which selects the next iterate by \begin{equation}\label{eq:iter_prjec} \matr{X}_{k+1} = \mathcal{P}_{\mathcal{M}} (\matr{X}_{k}-\tau_k\nabla f(\matr{X}_{k})), \end{equation} where $\mathcal{P}_{\mathcal{M}}:\RR^{n\times r}\rightarrow \mm$ denotes the \emph{projection} onto $\mm$, which computes the best approximation in $\mm$, and $\tau_{k}>0$ is the stepsize. It is well known that $\mathcal{P}_{\mathcal{M}}$ can be computed via the polar decomposition when $\mm = \St(r, n) $; see \cite[Lem. 5]{li2019polar} and the references therein. We will demonstrate later in \cref{lem:proj-Gr} that, when $\mm = \Gr(p, n)$, the projection $\mathcal{P}_{\mathcal{M}}$ can be obtained from the eigenvalue decomposition. Although the update schemes \eqref{eq:iter_retr} and \eqref{eq:iter_prjec} both keep the iterates in the feasible region $ \mm $ and, for tangent vectors $\matr{V}\in\TangM{\matr{X}}$, the map $ \operatorname{R}_{\matr{X}}:\matr{V}\mapsto \mathcal{P}_{\mathcal{M}}(\matr{X} + \matr{V}) $ forms a retraction <|cite_start|> (Reference: Projection-like Retractions on Matrix Manifolds: This paper deals with constructing retractions, a key step when applying optimization algorithms on matrix manifolds. For submanifolds of Euclidean spaces, we show that the operation consisting of taking a tangent step in the embedding Euclidean space followed by a projection onto the submanifold is a retraction.
We also show that the operation remains a retraction if the projection is generalized to a projection-like procedure that consists of coming back to the submanifold along “admissible” directions, and we give a sufficient condition on the admissible directions for the generated retraction to be second order. This theory offers a framework in which previously proposed retractions can be analyzed, as well as a toolbox for constructing new ones. Illustrations are given for projection-like procedures on some specific manifolds for which we have an explicit, easy-to-compute expression.) <|cite_end|>, there still exist fundamental differences between them. For example, the Euclidean gradient $ \nabla f(\matr{X}) $ in \eqref{eq:iter_prjec} is not necessarily tangent to $\mm$ at $ \matr{X} $ in general, and there also exist other choices of retraction besides the projection. Therefore, the existing analysis of retraction-based line-search algorithms \eqref{eq:iter_retr} cannot be directly applied to the projection-based scheme in \eqref{eq:iter_prjec}. In recent years, there has been extensive research on the convergence of the gradient projection algorithm \eqref{eq:iter_prjec} for addressing \emph{phase synchronization} <|cite_start|> (Reference: On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization: An estimation problem of fundamental interest is that of phase synchronization, in which the goal is to recover a collection of phases using noisy measurements of relative phases. It is known that in the Gaussian noise setting, the maximum likelihood estimator (MLE) has an expected squared $\ell_2$-estimation error that is on the same order as the Cram\'er-Rao lower bound. Moreover, even though the MLE is an optimal solution to a non-convex quadratic optimization problem, it can be found with high probability using semidefinite programming (SDP), provided that the noise power is not too large. In this paper, we study the estimation and convergence performance of a recently-proposed low-complexity alternative to the SDP-based approach, namely, the generalized power method (GPM). Our contribution is twofold. First, we bound the rate at which the estimation error decreases in each iteration of the GPM and use this bound to show that all iterates---not just the MLE---achieve an estimation error that is on the same order as the Cram\'er-Rao bound. Our result holds under the least restrictive assumption on the noise power and gives the best provable bound on the estimation error known to date. It also implies that one can terminate the GPM at any iteration and still obtain an estimator that has a theoretical guarantee on its estimation error. Second, we show that under the same assumption on the noise power as that for the SDP-based method, the GPM will converge to the MLE at a linear rate with high probability. This answers a question raised in [3] and shows that the GPM is competitive in terms of both theoretical guarantees and numerical efficiency with the SDP-based method. At the heart of our convergence rate analysis is a new error bound for the non-convex quadratic optimization formulation of the phase synchronization problem, which could be of independent interest.) <|cite_end|> <|cite_start|> (Reference: Improved Performance Guarantees for Orthogonal Group Synchronization via Generalized Power Method: Given the noisy pairwise measurements among a set of unknown group elements, how to recover them efficiently and robustly?
This problem, known as group synchronization, has drawn tremendous attention in the scientific community. In this work, we focus on orthogonal group synchronization that has found many applications, including computer vision, robotics, and cryo-electron microscopy. One commonly used approach is the least squares estimation that requires solving a highly nonconvex optimization program. The past few years have witnessed considerable advances in tackling this challenging problem by convex relaxation and efficient first-order methods. However, one fundamental theoretical question remains to be answered: how does the recovery performance depend on the noise strength? To answer this question, we study a benchmark model: recovering orthogonal group elements from their pairwise measurements corrupted by Gaussian noise. We investigate the performance of convex relaxation and the generalized power method (GPM). By applying the novel~\emph{leave-one-out} technique, we prove that the GPM with spectral initialization enjoys linear convergence to the global optima to the convex relaxation that also matches the maximum likelihood estimator. Our result achieves a near-optimal performance bound on the convergence of the GPM and improves the state-of-the-art theoretical guarantees on the tightness of convex relaxation by a large margin.) <|cite_end|> <|cite_start|> (Reference: Orthogonal group synchronization with incomplete measurements: Error bounds and linear convergence of the generalized power method: Group synchronization refers to estimating a collection of group elements from the noisy pairwise measurements. Such a nonconvex problem has received much attention from numerous scientific fields including computer vision, robotics, and cryo-electron microscopy. In this paper, we focus on the orthogonal group synchronization problem with general additive noise models under incomplete measurements, which is much more general than the commonly considered setting of complete measurements. Characterizations of the orthogonal group synchronization problem are given from perspectives of optimality conditions as well as fixed points of the projected gradient ascent method which is also known as the generalized power method (GPM). It is well worth noting that these results still hold even without generative models. In the meantime, we derive the local error bound property for the orthogonal group synchronization problem which is useful for the convergence rate analysis of different algorithms and can be of independent interest. Finally, we prove the linear convergence result of the GPM to a global maximizer under a general additive noise model based on the established local error bound property. Our theoretical convergence result holds under several deterministic conditions which can cover certain cases with adversarial noise, and as an example we specialize it to the setting of the Erd\"os-R\'enyi measurement graph and Gaussian noise.) <|cite_end|>and \emph{tensor approximation} problems <|cite_start|> (Reference: On the tensor {SVD} and the optimal low rank orthogonal approximation of tensors: It is known that a higher order tensor does not necessarily have an optimal low rank approximation, and that a tensor might not be orthogonally decomposable (i.e., admit a tensor SVD). We provide several sufficient conditions which lead to the failure of the tensor SVD, and characterize the existence of the tensor SVD with respect to the higher order SVD (HOSVD). 
In the face of these difficulties to generalize standard results known in the matrix case to tensors, we consider the low rank orthogonal approximation of tensors. The existence of an optimal approximation is theoretically guaranteed under certain conditions, and this optimal approximation yields a tensor decomposition where the diagonal of the core is maximized. We present an algorithm to compute this approximation and analyze its convergence behavior. Numerical experiments indicate a linear convergence rate for this algorithm.) <|cite_end|> <|cite_start|> (Reference: The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence: The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are assumed to be columnwisely orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality assumption not stronger than those in the literature. These results completely address a question concerning the global convergence raised in [L. Wang, M. T. Chu and B. Yu, \emph{SIAM J. Matrix Anal. Appl.}, 36 (2015), pp. 1--19]. In addition, an initialization procedure is proposed, which possesses a provable lower bound when the number of columnwisely orthonormal factors is one. Armed with this initialization procedure, numerical experiments show that the $\epsilon$-ALS exhibits a promising performance in terms of efficiency and effectiveness.) <|cite_end|> <|cite_start|> (Reference: Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations: ) <|cite_end|> <|cite_start|> (Reference: Polar Decomposition-based Algorithms on the Product of Stiefel Manifolds with Applications in Tensor Approximation: ) <|cite_end|>, where the feasible set can be the Stiefel manifold, the product of Stiefel manifolds, or the Grassmann manifold. In addition, various variants of the gradient projection algorithm \eqref{eq:iter_prjec} have also been studied, as well as their convergence properties. For example, an algorithm combining \eqref{eq:iter_prjec} with a correction step was proposed in <|cite_start|> (Reference: A New First-Order Algorithmic Framework for Optimization Problems with Orthogonality Constraints: In this paper, we consider a class of optimization problems with orthogonality constraints, the feasible region of which is called the Stiefel manifold. Our new framework combines a function value reduction step with a correction step. Different from the existing approaches, the function value reduction step of our algorithmic framework searches along the standard Euclidean descent directions instead of the vectors in the tangent space of the Stiefel manifold, and the correction step further reduces the function value and guarantees a symmetric dual variable at the same time. We construct two types of algorithms based on this new framework. The first type is based on gradient reduction including the gradient reflection (GR) and the gradient projection (GP) algorithms. The other one adopts a columnwise block coordinate descent (CBCD) scheme with a novel idea for solving the corresponding CBCD subproblem inexactly. 
We prove that both GR/GP with a fixed step size and CBCD belong to our algorithmic framework,...) <|cite_end|>. In the long line of work presented in <|cite_start|> (Reference: A Scaled Gradient Projection Method for Minimization over the Stiefel Manifold: ) <|cite_end|> <|cite_start|> (Reference: Two adaptive scaled gradient projection methods for Stiefel manifold constrained optimization: ) <|cite_end|> <|cite_start|> (Reference: A non-monotone linear search algorithm with mixed direction on Stiefel manifold: In this paper, we propose a non-monotone line search method for solving optimization problems on Stiefel manifold. The main novelty of our approach is that our method uses a search direction based on a linear combination of descent directions and a Barzilai–Borwein line search. The feasibility is guaranteed by projecting each iterate on the Stiefel manifold through SVD (singular value decomposition) factorizations. Some theoretical results for analysing the algorithm are presented. Finally, we provide numerical experiments for comparing our algorithm with other state-of-the-art procedures. The code is available online. The experimental results show that the proposed algorithm is competitive with other approaches and for particular problems, the computational performance is better than the state-of-the-art algorithms.) <|cite_end|>, scaled and non-monotone variants of the gradient projection method on the Stiefel manifold have been developed.
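To illustrate the update \eqref{eq:iter_prjec} itself, the following sketch (again our own illustration rather than code from the cited works; the eigenvalue-decomposition formula for $\Gr(p,n)$ is stated here as the nearest rank-$p$ orthogonal projector in the Frobenius norm, in the spirit of \cref{lem:proj-Gr}) computes $\mathcal{P}_{\mathcal{M}}$ via the polar decomposition for $\mm = \St(r,n)$ and via an eigenvalue decomposition for the projector representation of $\Gr(p,n)$; unlike \eqref{eq:iter_retr}, each step moves along the Euclidean gradient before projecting back.
\begin{verbatim}
# Illustrative sketch of the gradient projection update (eq:iter_prjec):
# X_{k+1} = P_M(X_k - tau * grad f(X_k)), on the same toy objective
# f(X) = -0.5 tr(X^T A X) used above.
import numpy as np

def proj_stiefel(Y):
    # metric projection onto St(r, n): the polar factor of Y
    # (well defined when Y has full column rank)
    U, _, Wt = np.linalg.svd(Y, full_matrices=False)
    return U @ Wt

def proj_grassmann(Y, p):
    # nearest rank-p orthogonal projector in the Frobenius norm:
    # symmetrize, then keep eigenvectors of the p largest eigenvalues
    S = (Y + Y.T) / 2
    w, Q = np.linalg.eigh(S)
    Qp = Q[:, np.argsort(w)[-p:]]
    return Qp @ Qp.T

rng = np.random.default_rng(0)
n, r, tau = 10, 3, 0.1
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
X = proj_stiefel(rng.standard_normal((n, r)))
for _ in range(500):
    X = proj_stiefel(X + tau * A @ X)   # Euclidean gradient of f is -A X
print(np.linalg.norm(A @ X - X @ (X.T @ A @ X)))  # ~0 at stationary points

P = proj_grassmann(rng.standard_normal((n, n)), r)
assert np.allclose(P, P.T) and np.allclose(P @ P, P)  # a point of Gr(r, n)
\end{verbatim}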
[ "<|reference_start|> An Introduction to Optimization on Smooth Manifolds: Optimization on Riemannian manifolds-the result of smooth geometry and optimization merging into one elegant modern framework-spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book. <|reference_end|>", "<|reference_start|> Optimization algorithms on matrix manifolds: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists. <|reference_end|>", "<|reference_start|> A Grassmann Manifold Handbook: Basic Geometry and Computational Aspects: The Grassmann manifold of linear subspaces is important for the mathematical modelling of a multitude of applications, ranging from problems in machine learning, computer vision and image processing to low-rank matrix optimization problems, dynamic low-rank decompositions and model reduction. With this mostly expository work, we aim to provide a collection of the essential facts and formulae on the geometry of the Grassmann manifold in a fashion that is fit for tackling the aforementioned problems with matrix-based algorithms. 
Moreover, we expose the Grassmann geometry both from the approach of representing subspaces with orthogonal projectors and when viewed as a quotient space of the orthogonal group, where subspaces are identified as equivalence classes of (orthogonal) bases. This bridges the associated research tracks and allows for an easy transition between these two approaches. Original contributions include a modified algorithm for computing the Riemannian logarithm map on the Grassmannian that is advantageous numerically but also allows for a more elementary, yet more complete description of the cut locus and the conjugate points. We also derive a formula for parallel transport along geodesics in the orthogonal projector perspective, formulae for the derivative of the exponential map, as well as a formula for Jacobi fields vanishing at one point. <|reference_end|>", "<|reference_start|> Improved Performance Guarantees for Orthogonal Group Synchronization via Generalized Power Method: Given the noisy pairwise measurements among a set of unknown group elements, how to recover them efficiently and robustly? This problem, known as group synchronization, has drawn tremendous attention in the scientific community. In this work, we focus on orthogonal group synchronization that has found many applications, including computer vision, robotics, and cryo-electron microscopy. One commonly used approach is the least squares estimation that requires solving a highly nonconvex optimization program. The past few years have witnessed considerable advances in tackling this challenging problem by convex relaxation and efficient first-order methods. However, one fundamental theoretical question remains to be answered: how does the recovery performance depend on the noise strength? To answer this question, we study a benchmark model: recovering orthogonal group elements from their pairwise measurements corrupted by Gaussian noise. We investigate the performance of convex relaxation and the generalized power method (GPM). By applying the novel~\\emph{leave-one-out} technique, we prove that the GPM with spectral initialization enjoys linear convergence to the global optima to the convex relaxation that also matches the maximum likelihood estimator. Our result achieves a near-optimal performance bound on the convergence of the GPM and improves the state-of-the-art theoretical guarantees on the tightness of convex relaxation by a large margin. <|reference_end|>" ]
[ 47, 49, 63, 76 ]
{"<|multi_cite_1_1|>": "ss-1261283", "<|multi_cite_1_2|>": "ss-958475", "<|multi_cite_1_3|>": "ss-2229951", "<|multi_cite_2_1|>": "ss-850958", "<|multi_cite_2_2|>": "ss-1096118", "<|multi_cite_2_3|>": "arxiv-236586", "<|multi_cite_3_1|>": "ss-813306", "<|multi_cite_3_2|>": "ss-1903593", "<|multi_cite_4_1|>": "arxiv-37633", "<|multi_cite_4_2|>": "ss-1014261", "<|multi_cite_4_3|>": "ss-1356700", "<|multi_cite_4_4|>": "arxiv-101588", "<|multi_cite_5_1|>": "ss-738155", "<|multi_cite_5_2|>": "ss-1537801", "<|cite_6|>": "arxiv-306186", "<|cite_7|>": "ss-719123", "<|multi_cite_8_1|>": "ss-2149560", "<|multi_cite_8_2|>": "ss-2229951", "<|cite_9|>": "ss-1471866", "<|cite_10|>": "ss-1315775", "<|cite_11|>": "arxiv-306186", "<|cite_12|>": "ss-2483034", "<|multi_cite_13_1|>": "ss-2229952", "<|multi_cite_13_3|>": "ss-1316828", "<|multi_cite_14_1|>": "ss-1261283", "<|multi_cite_14_2|>": "ss-1255129", "<|multi_cite_14_3|>": "ss-1541270", "<|cite_15|>": "ss-766678", "<|multi_cite_16_2|>": "ss-2026491", "<|cite_17|>": "ss-766678", "<|multi_cite_18_1|>": "ss-766678", "<|multi_cite_18_2|>": "ss-2026491", "<|multi_cite_18_3|>": "arxiv-63857", "<|multi_cite_18_4|>": "ss-1909427", "<|multi_cite_19_1|>": "ss-738155", "<|multi_cite_19_2|>": "arxiv-45579", "<|multi_cite_20_1|>": "ss-2229953", "<|multi_cite_20_2|>": "ss-1331132", "<|multi_cite_21_1|>": "ss-2229953", "<|multi_cite_21_2|>": "arxiv-462850", "<|multi_cite_22_1|>": "ss-1537801", "<|multi_cite_22_2|>": "ss-1683462", "<|multi_cite_22_3|>": "ss-850507", "<|cite_23|>": "ss-738155", "<|multi_cite_24_1|>": "ss-1261283", "<|cite_25|>": "ss-1331132", "<|multi_cite_26_1|>": "ss-1261283", "<|multi_cite_26_2|>": "ss-1255129", "<|multi_cite_26_3|>": "ss-1541270", "<|multi_cite_27_1|>": "ss-1261283", "<|multi_cite_27_2|>": "ss-2501711", "<|multi_cite_27_3|>": "ss-1929109", "<|multi_cite_27_4|>": "ss-2229954", "<|multi_cite_28_1|>": "ss-2149560", "<|multi_cite_28_2|>": "ss-1167847", "<|multi_cite_28_3|>": "ss-2150002", "<|multi_cite_28_4|>": "ss-769774", "<|multi_cite_28_5|>": "ss-769776", "<|multi_cite_29_1|>": "ss-1167846", "<|multi_cite_29_2|>": "ss-2501711", "<|cite_30|>": "ss-1261283", "<|cite_31|>": "ss-1541270", "<|cite_32|>": "ss-2229955", "<|cite_33|>": "arxiv-306186", "<|multi_cite_34_1|>": "ss-2501711", "<|multi_cite_34_2|>": "arxiv-98504", "<|multi_cite_34_3|>": "arxiv-92510", "<|cite_35|>": "ss-1261283", "<|multi_cite_36_1|>": "ss-1929109", "<|multi_cite_36_2|>": "ss-2229954", "<|cite_37|>": "ss-1929109", "<|multi_cite_38_1|>": "ss-962639", "<|multi_cite_38_2|>": "ss-2501711", "<|cite_39|>": "ss-2229955", "<|cite_40|>": "ss-1331132", "<|multi_cite_41_1|>": "ss-1341716", "<|multi_cite_41_2|>": "arxiv-306943", "<|multi_cite_41_3|>": "ss-957962", "<|multi_cite_42_1|>": "ss-1256273", "<|multi_cite_42_2|>": "arxiv-236127", "<|multi_cite_42_3|>": "ss-2540643", "<|multi_cite_42_4|>": "ss-2229951", "<|cite_43|>": "ss-1857890", "<|multi_cite_44_1|>": "ss-1316834", "<|multi_cite_44_2|>": "ss-1316835", "<|multi_cite_44_3|>": "ss-1316836", "<|multi_cite_45_1|>": "ss-699297", "<|multi_cite_45_2|>": "ss-1210431", "<|multi_cite_45_3|>": "ss-1375482", "<|multi_cite_46_1|>": "ss-2579528", "<|multi_cite_46_2|>": "ss-1526475", "<|multi_cite_46_3|>": "ss-2229956", "<|multi_cite_47_1|>": "ss-2501711", "<|multi_cite_47_2|>": "arxiv-233776", "<|multi_cite_47_3|>": "ss-1929109"}
2201.12594-0
<|paper_start|> Title: Robust Imitation Learning from Corrupted Demonstrations Abstract: Robust Imitation Learning from Corrupted Demonstrations: We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of data can be noise or even arbitrary outliers. Classical approaches such as Behavior Cloning assume that demonstrations are collected by a presumably optimal expert, and hence may fail drastically when learning from corrupted demonstrations. We propose a novel robust algorithm by minimizing a Median-of-Means (MOM) objective which guarantees accurate estimation of the policy even in the presence of a constant fraction of outliers. Our theoretical analysis shows that our robust method in the corrupted setting enjoys nearly the same error scaling and sample complexity guarantees as classical Behavior Cloning in the expert demonstration setting. Our experiments on continuous-control benchmarks validate that our method exhibits the predicted robustness and effectiveness, and achieves competitive results compared to existing imitation learning methods. Introduction \label{sec:intro} Recent years have witnessed the success of using autonomous agents to learn and adapt to complex tasks and environments in a range of applications such as playing games~\citep[e.g.][]{mnih2015human, silver2018general, vinyals2019grandmaster}, autonomous driving~\citep[e.g.][]{kendall2019learning, bellemare2020autonomous}, robotics <|cite_start|> (Reference: Reinforcement Learning with Deep Energy-Based Policies: We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting into a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed performing approximate inference on the corresponding energy-based model.) <|cite_end|>, medical treatment~\citep[e.g.][]{RLHealthCare}, and recommendation systems and advertising~\citep[e.g.][]{li11unbiased, thomas17predictive}. Previous successes in sequential decision making often require two key components: (1) a carefully designed reward function that provides the supervision signal during learning and (2) an unlimited number of online interactions with the real-world environment (or a carefully designed simulator) to query new, unseen regions. However, in many scenarios, neither component is available. For example, it is hard to define the reward signal in the countless extreme situations that arise in autonomous driving <|cite_start|> (Reference: Deep Reinforcement Learning for Autonomous Driving: A Survey: With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments.
This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.) <|cite_end|>; and it is dangerous to directly deploy a learned policy on humans to gather information in autonomous medical treatment <|cite_start|> (Reference: Reinforcement Learning in Healthcare: A Survey: As a subfield of machine learning, reinforcement learning (RL) aims at empowering one's capabilities in behavioural decision making by using interaction experience with the world and an evaluative feedback. Unlike traditional supervised learning methods that usually rely on one-shot, exhaustive and supervised reward signals, RL tackles with sequential decision making problems with sampled, evaluative and delayed feedback simultaneously. Such distinctive features make RL technique a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnosing decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare domains, in order to provide the research community with systematic understanding of theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. By first briefly examining theoretical foundations and key techniques in RL research from efficient and representational directions, we then provide an overview of RL applications in healthcare domains ranging from dynamic treatment regimes in chronic diseases and critical care, automated medical diagnosis from both unstructured and structured clinical data, as well as many other control or scheduling domains that have infiltrated many aspects of a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out some potential solutions and directions for future research.) <|cite_end|>. Therefore, an \emph{offline} sequential decision-making algorithm that requires no reward signal is in demand. Imitation Learning (IL) <|cite_start|> (Reference: An Algorithmic Perspective on Imitation Learning: As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences.
First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning.) <|cite_end|> offers an elegant way to train intelligent agents for complex tasks without knowledge of the reward function. In order to guide intelligent agents to correct behaviors, it is crucial to have high-quality expert demonstrations. The well-known imitation learning algorithms such as Behavior Cloning (BC, <|cite_start|> (Reference: ALVINN: an autonomous land vehicle in a neural network: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.) <|cite_end|>) or Generative Adversarial Imitation Learning (GAIL, <|cite_start|> (Reference: Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.) <|cite_end|>) require that the demonstrations given for training are all \emph{presumably optimal}, and they aim to learn the optimal policy from the expert demonstration data set. More specifically, BC only uses offline demonstration data without any interaction with the environment, whereas GAIL requires online interactions. However, in real-world scenarios, since demonstrations are often collected from humans, we cannot guarantee that \emph{all} of the collected demonstrations are of high quality. This has been addressed in a line of research <|cite_start|> (Reference: Imitation Learning from Imperfect Demonstration: Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly.
To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations. More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.) <|cite_end|> <|cite_start|> (Reference: Variational Imitation Learning with Diverse-quality Demonstrations: (19) Since $f_t(\phi,\omega) = F_t(\phi,\omega,\psi) = \max_{\psi} F_t(\phi,\omega,\psi)$, we have that $f(\phi,\omega) = \max_{\psi} F(\phi,\omega,\psi)$. A.2. Lower-bound $G$. Next, we derive the lower-bound $G$ of $g(\phi,\omega) = \log Z_{\phi,\omega}$. We first derive a trivial lower-bound using a “general” variational distribution over trajectories and discuss its issue. Then, we derive a lower-bound presented in the paper by using a structured variational distribution. Recall that the normalization term $Z_{\phi,\omega}$ of the model $p_{\phi,\omega}$ is given by $Z_{\phi,\omega} = K$) <|cite_end|> <|cite_start|> (Reference: Robust Imitation Learning from Noisy Demonstrations: Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first theoretically show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations: A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.) <|cite_end|> <|cite_start|> (Reference: Behavioral Cloning from Noisy Demonstrations: ) <|cite_end|>. A human expert can make mistakes by accident or due to the hardness of a complicated scenario (e.g., medical diagnosis).
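To make the fragility concrete, the following is a minimal, self-contained sketch of vanilla BC as supervised regression on demonstration pairs; the helper name \texttt{behavior\_cloning}, the two-layer network, the squared-error loss, and the optimizer settings are illustrative assumptions, not the configuration used in this paper.
\begin{verbatim}
# A minimal sketch of vanilla Behavior Cloning (BC). Architecture,
# loss, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def behavior_cloning(states, actions, epochs=200, lr=1e-3):
    """Fit a deterministic policy pi(s) -> a by minimizing the
    empirical mean of the per-pair imitation loss."""
    policy = nn.Sequential(
        nn.Linear(states.shape[1], 64), nn.Tanh(),
        nn.Linear(64, actions.shape[1]),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        # The plain average over *all* pairs is the weak point: a few
        # arbitrarily corrupted (state, action) pairs can dominate
        # both the loss and its gradient.
        loss = ((policy(states) - actions) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

# Toy usage with synthetic "expert" pairs.
states = torch.randn(1000, 4)
actions = torch.tanh(states.sum(dim=1, keepdim=True))
pi = behavior_cloning(states, actions)
\end{verbatim}
Because the objective is a plain empirical mean over all pairs, even a handful of arbitrarily corrupted pairs can move the loss, its gradient, and hence the learned policy arbitrarily far, which is precisely the failure mode studied in this paper.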
Furthermore, even when an expert demonstrates a successful behavior, the recorder or the recording system may contaminate the data by accident or on purpose \citep[e.g.][]{neff2016automation, eykholt2018robust, Zhu2106}. This leads to the central question of the paper: \begin{center} \fbox{\begin{varwidth}{\columnwidth} \centering Can the optimality assumption on expert demonstrations be weakened, or even tolerate arbitrary outliers, under offline imitation learning settings? \end{varwidth}} \end{center} \begin{figure} \centering \includegraphics[width=.5\linewidth]{ICML_Hopper_intro.pdf} \caption{Reward vs. percentage of corruptions in the Hopper environment from PyBullet with corrupted demonstrations. We fix the sample size of the demonstration data set and vary the fraction of corruptions $\epsilon$ up to 20\%. The shaded region represents one standard deviation over 20 trials. Our algorithm, Robust Behavior Cloning (RBC), on corrupted demonstrations has nearly the same performance as BC on expert demonstrations (the case $\epsilon=0$), which achieves expert level, and its performance barely changes as $\epsilon$ grows to 20\%. By contrast, the performance of vanilla BC on corrupted demonstrations fails drastically. The detailed experimental setup and comparisons with existing methods are included in \Cref{fig:curve} and \Cref{fig:curve_size}. } \label{fig:Curve_Hopper} \end{figure} More concretely, we consider the \emph{corrupted demonstrations} setting, where the majority of the demonstration data is collected by an expert policy (presumably optimal), and the remaining data can be even \emph{arbitrary} outliers (the formal definition is presented in \Cref{def:Huber}). Such definitions allowing \emph{arbitrary} outliers for the corrupted samples have a rich history in robust statistics <|cite_start|> (Reference: Robust Estimation of a Location Parameter: ) <|cite_end|> <|cite_start|> (Reference: Robust statistics: The classical books on this subject are Hampel et al. (1986); Huber (1981), with somewhat simpler (but partial) introductions by Rousseeuw & Leroy (1987); Staudte & Sheather (1990). The dates reflect the development of the subject: it had tremendous growth for about two decades from 1964, but failed to win over the mainstream. I think it is an important area that is used a lot less than it ought to be.) <|cite_end|>, yet have not been widely used in imitation learning. This has great significance in many applications, such as automated medical diagnosis for healthcare ( <|cite_start|> (Reference: Reinforcement Learning in Healthcare: A Survey: As a subfield of machine learning, reinforcement learning (RL) aims at empowering one's capabilities in behavioural decision making by using interaction experience with the world and an evaluative feedback. Unlike traditional supervised learning methods that usually rely on one-shot, exhaustive and supervised reward signals, RL tackles with sequential decision making problems with sampled, evaluative and delayed feedback simultaneously. Such distinctive features make RL technique a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnosing decisions or treatment regimes are usually characterized by a prolonged and sequential procedure.
This survey discusses the broad applications of RL techniques in healthcare domains, in order to provide the research community with systematic understanding of theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. By first briefly examining theoretical foundations and key techniques in RL research from efficient and representational directions, we then provide an overview of RL applications in healthcare domains ranging from dynamic treatment regimes in chronic diseases and critical care, automated medical diagnosis from both unstructured and structured clinical data, as well as many other control or scheduling domains that have infiltrated many aspects of a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out some potential solutions and directions for future research.) <|cite_end|>) and autonomous driving <|cite_start|> (Reference: Improved Robustness and Safety for Autonomous Vehicle Control with Adversarial Reinforcement Learning: To improve efficiency and reduce failures in autonomous vehicles, research has focused on developing robust and safe learning methods that take into account disturbances in the environment. Existing literature in robust reinforcement learning poses the learning problem as a two player game between the autonomous system and disturbances. This paper examines two different algorithms to solve the game, Robust Adversarial Reinforcement Learning and Neural Fictitious Self Play, and compares performance on an autonomous driving scenario. We extend the game formulation to a semi-competitive setting and demonstrate that the resulting adversary better captures meaningful disturbances that lead to better overall performance. The resulting robust policy exhibits improved driving efficiency while effectively reducing collision rates compared to baseline control policies produced by traditional reinforcement learning methods.) <|cite_end|>, where the historical data (demonstrations) are often complicated and noisy, which requires robustness considerations. However, classical \emph{offline} imitation learning approaches such as Behavior Cloning (BC) fail drastically under this corrupted demonstration setting. We illustrate this phenomenon in \Cref{fig:Curve_Hopper}. We run BC on the Hopper environment (a continuous control environment from PyBullet), and the performance of the policy learned by BC drops drastically as the fraction of corruptions in the offline demonstration data set increases. In this paper, we propose a novel robust imitation learning algorithm -- Robust Behavior Cloning (RBC, \Cref{alg:RBC}), which is resilient to corruptions in the offline demonstrations. In particular, our RBC does not require potentially costly or risky interaction with the real-world environment or any human annotations. In \Cref{fig:Curve_Hopper}, our RBC on corrupted demonstrations has nearly the same performance as BC on expert demonstrations (the case $\epsilon=0$), which achieves expert level, and its performance barely changes as $\epsilon$ grows to 20\%. The detailed experimental setup and comparisons with existing methods (e.g., <|cite_start|> (Reference: Behavioral Cloning from Noisy Demonstrations: ) <|cite_end|>) are included in \Cref{sec:Experiments}. \subsection{Main Contributions} \begin{itemize} \item (Algorithm) We consider robustness in offline imitation learning where we have corrupted demonstrations.
Our definition of corrupted demonstrations significantly weakens the presumed-optimality assumption on the demonstration data, and tolerates a constant $\epsilon$-fraction of state-action pairs being arbitrarily corrupted. We refer to \Cref{def:Huber} for a more precise statement. To deal with this issue, we propose a novel algorithm, Robust Behavior Cloning (\Cref{alg:RBC}), for robust imitation learning. Our algorithm works in the offline setting, without any further interaction with the environment or any human annotations. The core ingredient of our robust algorithm is a novel median-of-means objective in policy estimation, in place of the empirical-mean objective of classical Behavior Cloning. Hence, it is simple to implement and computationally efficient. \item (Theoretical guarantees) We analyze our Robust Behavior Cloning algorithm when a constant fraction of outliers exists in the demonstrations under the offline setting. To the best of our knowledge, we provide the \emph{first theoretical guarantee} robust to a constant fraction of arbitrary outliers in offline imitation learning. We show that our RBC achieves nearly the same error scaling and sample complexity as vanilla BC with expert demonstrations. To this end, our algorithm guarantees robustness to corrupted demonstrations at no additional cost in statistical estimation error. This is the content of \Cref{sec:theory}. \item (Empirical support) We validate the predicted robustness and show the effectiveness of our algorithm on a number of different high-dimensional continuous control benchmarks. Vanilla BC is indeed fragile with corrupted demonstrations, yet our Robust Behavior Cloning is computationally efficient and achieves nearly the same performance as vanilla BC with expert demonstrations. \Cref{sec:Experiments} also shows that our algorithm achieves competitive results compared to existing imitation learning methods. \end{itemize} \paragraph{Notation.} Throughout this paper, we use $\{c_i\}_{i=1,2,3}$ to denote universal positive constants. We utilize the big-$O$ notation $f(n) = O(g(n))$ to denote that there exists a positive constant $c_1$ and a natural number $n_0$ such that, for all $n\geq n_0$, we have $f(n) \leq c_1 g(n)$. \paragraph{Outline.} The rest of this paper is organized as follows. In \Cref{sec:setup}, we formally define the setup and the corrupted demonstrations. In \Cref{sec:algo}, we introduce our RBC and the computationally efficient algorithm (\Cref{alg:RBC}). We provide the theoretical analysis in \Cref{sec:theory}, and experimental results in \Cref{sec:Experiments}. We leave the detailed discussion of related work to \Cref{sec:related}. All proofs and experimental details are collected in the Appendix. Related Work \label{sec:related} \textbf{Imitation Learning.} Behavior Cloning (BC) is the most widely-used imitation learning algorithm <|cite_start|> (Reference: ALVINN: an autonomous land vehicle in a neural network: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors.
We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.) <|cite_end|> <|cite_start|> (Reference: An Algorithmic Perspective on Imitation Learning: As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning.) <|cite_end|> due to its simplicity, effectiveness, and scalability, and it has been widely used in practice. From a theoretical viewpoint, it has been shown that BC achieves informational optimality in the offline setting <|cite_start|> (Reference: Toward the Fundamental Limits of Imitation Learning: Imitation learning (IL) aims to mimic the behavior of an expert policy in a sequential decision-making problem given only demonstrations. In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provided a dataset of $N$ expert trajectories ahead of time, and cannot interact with the MDP. Here, we show that the policy which mimics the expert whenever possible is in expectation $\lesssim \frac{|\mathcal{S}| H^2 \log (N)}{N}$ suboptimal compared to the value of the expert, even when the expert follows an arbitrary stochastic policy. Here $\mathcal{S}$ is the state space, and $H$ is the length of the episode. Furthermore, we establish a suboptimality lower bound of $\gtrsim |\mathcal{S}| H^2 / N$ which applies even if the expert is constrained to be deterministic, or if the learner is allowed to actively query the expert at visited states while interacting with the MDP for $N$ episodes. To our knowledge, this is the first algorithm with suboptimality having no dependence on the number of actions, under no additional assumptions. We then propose a novel algorithm based on minimum-distance functionals in the setting where the transition model is given and the expert is deterministic. The algorithm is suboptimal by $\lesssim \min \{ H \sqrt{|\mathcal{S}| / N} ,\ |\mathcal{S}| H^{3/2} / N \}$, showing that knowledge of transition improves the minimax rate by at least a $\sqrt{H}$ factor.)
<|cite_end|>, where we do not have \emph{further online interactions} or knowledge of the transition dynamic $\Tran$. With online interaction, there is a line of research focusing on improving BC in different scenarios -- for example, <|cite_start|> (Reference: A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.) <|cite_end|> proposed DAgger (Dataset Aggregation), which queries the expert policy in the online setting. <|cite_start|> (Reference: Disagreement-regularized Imitation Learning: We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.) <|cite_end|> proposed using an ensemble of BC policies as an uncertainty measure and interacting with the environment to improve BC by taking the uncertainty into account, without the need to query the expert. Very recently, <|cite_start|> (Reference: Error Bounds of Imitating Policies and Environments for Reinforcement Learning: In sequential decision-making, imitation learning (IL) trains a policy efficiently by mimicking expert demonstrations. Various imitation methods were proposed and empirically evaluated, meanwhile, their theoretical understandings need further studies, among which the compounding error in long-horizon decisions is a major issue. In this paper, we first analyze the value gap between the expert policy and imitated policies by two imitation methods, behavioral cloning (BC) and generative adversarial imitation. The results support that generative adversarial imitation can reduce the compounding error compared to BC. Furthermore, we establish the lower bounds of IL under two settings, suggesting the significance of environment interactions in IL.
By considering the environment transition model as a dual agent, IL can also be used to learn the environment model. Therefore, based on the bounds of imitating policies, we further analyze the performance of imitating environments. The results show that environment models can be more effectively imitated by generative adversarial imitation than BC. Particularly, we obtain a policy evaluation error that is linear with the effective planning horizon w.r.t. the model bias, suggesting a novel application of adversarial imitation for model-based reinforcement learning (MBRL). We hope these results could inspire future advances in IL and MBRL.) <|cite_end|> <|cite_start|> (Reference: Provably Breaking the Quadratic Error Compounding Barrier in Imitation Learning, Optimally: We study the statistical limits of Imitation Learning (IL) in episodic Markov Decision Processes (MDPs) with a state space $\mathcal{S}$. We focus on the known-transition setting where the learner is provided a dataset of $N$ length-$H$ trajectories from a deterministic expert policy and knows the MDP transition. We establish an upper bound $O(|\mathcal{S}|H^{3/2}/N)$ for the suboptimality using the Mimic-MD algorithm in Rajaraman et al (2020) which we prove to be computationally efficient. In contrast, we show the minimax suboptimality grows as $\Omega( H^{3/2}/N)$ when $|\mathcal{S}|\geq 3$ while the unknown-transition setting suffers from a larger sharp rate $\Theta(|\mathcal{S}|H^2/N)$ (Rajaraman et al (2020)). The lower bound is established by proving a two-way reduction between IL and the value estimation problem of the unknown expert policy under any given reward function, as well as building connections with linear functional estimation with subsampled observations. We further show that under the additional assumption that the expert is optimal for the true reward function, there exists an efficient algorithm, which we term as Mimic-Mixture, that provably achieves suboptimality $O(1/N)$ for arbitrary 3-state MDPs with rewards only at the terminal layer. In contrast, no algorithm can achieve suboptimality $O(\sqrt{H}/N)$ with high probability if the expert is not constrained to be optimal. Our work formally establishes the benefit of the expert optimal assumption in the known transition setting, while Rajaraman et al (2020) showed it does not help when transitions are unknown.) <|cite_end|> <|cite_start|> (Reference: Nearly Minimax Optimal Adversarial Imitation Learning with Known and Unknown Transitions: model-based) <|cite_end|> leveraged the knowledge of the transition dynamic $\Tran$ to eliminate the compounding error/distribution shift issue in BC. Besides BC, there are other imitation learning algorithms: <|cite_start|> (Reference: Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning.
We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.) <|cite_end|> used generative adversarial networks for distribution matching to learn a reward function; <|cite_start|> (Reference: SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards: Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.) <|cite_end|> provided an RL framework to deal with IL by artificially setting the reward; <|cite_start|> (Reference: A Divergence Minimization Perspective on Imitation Learning Methods: In many settings, it is desirable to learn decision-making and control policies through learning or bootstrapping from expert demonstrations. The most common approaches under this Imitation Learning (IL) framework are Behavioural Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, due to multiple factors of variation, directly comparing these methods does not provide adequate intuition for understanding this difference in performance. In this work, we present a unified probabilistic perspective on IL algorithms based on divergence minimization. We present $f$-MAX, an $f$-divergence generalization of AIRL [Fu et al., 2018], a state-of-the-art IRL method.
$f$-MAX enables us to relate prior IRL methods such as GAIL [Ho & Ermon, 2016] and AIRL [Fu et al., 2018], and understand their algorithmic properties. Through the lens of divergence minimization we tease apart the differences between BC and successful IRL approaches, and empirically evaluate these nuances on simulated high-dimensional continuous control domains. Our findings conclusively identify that IRL's state-marginal matching objective contributes most to its superior performance. Lastly, we apply our new understanding of IL methods to the problem of state-marginal matching, where we demonstrate that in simulated arm pushing environments we can teach agents a diverse range of behaviours using simply hand-specified state distributions and no reward functions or expert demonstrations. For datasets and reproducing results please refer to https://github.com/KamyarGh/rl_swiss/blob/master/reproducing/fmax_paper.md .) <|cite_end|> unified several existing imitation learning algorithms as minimizing a distribution divergence between the learned policy and the expert demonstrations, just to name a few. \textbf{Offline RL.} RL leverages the signal from the reward function to train the policy. Different from IL, offline RL often does not require the demonstrations to be expert demonstrations \citep[e.g.][]{fujimoto2019off, fujimoto2021minimalist, kumar2020conservative} (interested readers are referred to <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) <|cite_end|>), and it even expects the offline data to have higher coverage over different sub-optimal policies <|cite_start|> (Reference: The Importance of Pessimism in Fixed-Dataset Policy Optimization: We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naive approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world.
We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.) <|cite_end|> <|cite_start|> (Reference: Is Pessimism Provably Efficient for Offline RL?: We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori. Due to the lack of further interactions with the environment, offline RL suffers from the insufficient coverage of the dataset, which eludes most existing theoretical analysis. In this paper, we propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function. Such a penalty function simply flips the sign of the bonus function for promoting exploration in online RL, which makes it easily implementable and compatible with general function approximators. Without assuming the sufficient coverage of the dataset, we establish a data-dependent upper bound on the suboptimality of PEVI for general Markov decision processes (MDPs). When specialized to linear MDPs, it matches the information-theoretic lower bound up to multiplicative factors of the dimension and horizon. In other words, pessimism is not only provably efficient but also minimax optimal. In particular, given the dataset, the learned policy serves as the "best effort" among all policies, as no other policies can do better. Our theoretical analysis identifies the critical role of pessimism in eliminating a notion of spurious correlation, which emerges from the "irrelevant" trajectories that are less covered by the dataset and not informative for the optimal policy.) <|cite_end|> <|cite_start|> (Reference: Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism: Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main categories of methods are used: imitation learning which is suitable for expert datasets and vanilla offline RL which often requires uniform coverage datasets. From a practical standpoint, datasets often deviate from these two extremes and the exact data composition is usually unknown a priori. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, hence unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation from the behavior policy to the expert policy alone. Under this new framework, we further investigate the question on algorithm design: can one develop an algorithm that achieves a minimax optimal rate and also adapts to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm developed based on pessimism in the face of uncertainty in offline RL. We study finite-sample properties of LCB as well as information-theoretic limits in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. 
In particular, in all three settings, LCB achieves a faster rate of $1/N$ for nearly-expert datasets compared to the usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in the batch dataset. In the case of contextual bandits with at least two contexts, we prove that LCB is adaptively optimal for the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.) <|cite_end|>. The behavior-agnostic setting <|cite_start|> (Reference: DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections: In many real-world reinforcement learning applications, access to the environment is limited to a fixed dataset, instead of direct (online) interaction with the environment. When using this data for either evaluation or training of a new policy, accurate estimates of discounted stationary distribution ratios -- correction terms which quantify the likelihood that the new policy will experience a certain state-action pair normalized by the probability with which the state-action pair appears in the dataset -- can improve accuracy and performance. In this work, we propose an algorithm, DualDICE, for estimating these quantities. In contrast to previous approaches, our algorithm is agnostic to knowledge of the behavior policy (or policies) used to generate the dataset. Furthermore, it eschews any direct use of importance weights, thus avoiding potential optimization instabilities endemic of previous methods. In addition to providing theoretical guarantees, we present an empirical study of our algorithm applied to off-policy policy evaluation and find that our algorithm significantly improves accuracy compared to existing techniques.) <|cite_end|> <|cite_start|> (Reference: Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning: Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, \cite{liu18breaking} proposed an approach that avoids the \emph{curse of horizon} suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data be drawn from the \emph{stationary distribution} of a \emph{known} behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a certain operator. Using tools from Reproducing Kernel Hilbert Spaces (RKHSs), we develop a new estimator that computes importance ratios of stationary distributions, without knowledge of how the off-policy data are collected. We analyze its asymptotic consistency and finite-sample generalization. Experiments on benchmarks verify the effectiveness of our approach.) <|cite_end|> does not even require the collected data to come from a single policy. The closest connection between offline RL and IL is the learning of the stationary visitation distribution, which does not involve a reward signal, similar to IL. A line of recent research, especially for off-policy evaluation, tries to learn the stationary visitation distribution of a given target policy~\citep[e.g.][]{liu2018breaking, nachum2019dualdice, tang2019doubly, mousavi2020blackbox, dai2020coindice}.
In particular, <|cite_start|> (Reference: Imitation Learning via Off-Policy Distribution Matching: When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.) <|cite_end|> brings the off-policy evaluation idea to the IL area. \textbf{Robustness in IL and RL.} Several recent papers consider corruption-robustness in either RL or IL. In RL, <|cite_start|> (Reference: Robust Policy Gradient against Strong Data Corruption: We study the problem of robust reinforcement learning under adversarial corruption on both rewards and transitions. Our attack model assumes an \textit{adaptive} adversary who can arbitrarily corrupt the reward and transition at every step within an episode, for at most $\epsilon$-fraction of the learning episodes. Our attack model is strictly stronger than those considered in prior works. Our first result shows that no algorithm can find a better than $O(\epsilon)$-optimal policy under our attack model. Next, we show that surprisingly the natural policy gradient (NPG) method retains a natural robustness property if the reward corruption is bounded, and can find an $O(\sqrt{\epsilon})$-optimal policy. Consequently, we develop a Filtered Policy Gradient (FPG) algorithm that can tolerate even unbounded reward corruption and can find an $O(\epsilon^{1/4})$-optimal policy. We emphasize that FPG is the first that can achieve a meaningful learning guarantee when a constant fraction of episodes are corrupted. Complimentary to the theoretical results, we show that a neural implementation of FPG achieves strong robust learning performance on the MuJoCo continuous control benchmarks.) <|cite_end|> considers an adversary that may corrupt whole episodes in the online RL setting, while a more recent work <|cite_start|> (Reference: Corruption-Robust Offline Reinforcement Learning: We study the adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify $\epsilon$ fraction of the tuples. From the corrupted dataset the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDP of dimension $d$, even if the adversary only corrupts the reward element in a tuple. This contrasts with dimension-free results in robust supervised learning and best-known lower-bound in the online RL setting with corruption.
Next, we propose robust variants of the Least-Square Value Iteration (LSVI) algorithm utilizing robust supervised learning oracles, which achieve near-matching performances in cases both with and without full data coverage. The algorithm requires the knowledge of $\epsilon$ to design the pessimism bonus in the no-coverage case. Surprisingly, in this case, the knowledge of $\epsilon$ is necessary, as we show that being adaptive to unknown $\epsilon$ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.) <|cite_end|> considers \emph{offline RL} where an $\epsilon$-fraction of the whole data set can be replaced by outliers. However, the $\epsilon$ dependency scales with the dimension in <|cite_start|> (Reference: Corruption-Robust Offline Reinforcement Learning: We study the adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify $\epsilon$ fraction of the tuples. From the corrupted dataset the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDP of dimension $d$, even if the adversary only corrupts the reward element in a tuple. This contrasts with dimension-free results in robust supervised learning and best-known lower-bound in the online RL setting with corruption. Next, we propose robust variants of the Least-Square Value Iteration (LSVI) algorithm utilizing robust supervised learning oracles, which achieve near-matching performances in cases both with and without full data coverage. The algorithm requires the knowledge of $\epsilon$ to design the pessimism bonus in the no-coverage case. Surprisingly, in this case, the knowledge of $\epsilon$ is necessary, as we show that being adaptive to unknown $\epsilon$ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.) <|cite_end|>, whereas $\epsilon$ can be a constant in this paper for robust offline IL. Many other papers consider perturbations, heavy tails, or corruptions in either the reward function <|cite_start|> (Reference: Bandits with heavy tail: The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order 1+\epsilon, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds that also show that the best achievable regret deteriorates when \epsilon <1.) <|cite_end|> or in the transition dynamics <|cite_start|> (Reference: Distributionally robust Markov decision processes: We consider Markov decision processes where the values of the parameters are uncertain. This uncertainty is described by a sequence of nested sets (that is, each set contains the previous one), each of which corresponds to a probabilistic guarantee for a different confidence level.
Consequently, a set of admissible probability distributions of the unknown parameters is specified. This formulation models the case where the decision maker is aware of and wants to exploit some (yet imprecise) a priori information of the distribution of parameters, and it arises naturally in practice where methods for estimating the confidence region of parameters abound. We propose a decision criterion based on distributional robustness: the optimal strategy maximizes the expected total reward under the most adversarial admissible probability distributions. We show that finding the optimal distributionally robust strategy can be reduced to the standard robust MDP where parameters are known to belong to a single uncertainty set; hence, it can be computed in polynomial time under mild technical conditions.) <|cite_end|> <|cite_start|> (Reference: Scaling Up Robust MDPs using Function Approximation: We consider large-scale Markov decision processes (MDPs) with parameter uncertainty, under the robust MDP paradigm. Previous studies showed that robust MDPs, based on a minimax approach to handling uncertainty, can be solved using dynamic programming for small to medium sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this work we employ a reinforcement learning approach to tackle this planning problem: we develop a robust approximate dynamic programming method based on a projected fixed point equation to approximately solve large scale robust MDPs. We show that the proposed method provably succeeds under certain technical conditions, and demonstrate its effectiveness through simulation of an option pricing problem. To the best of our knowledge, this is the first attempt to scale up the robust MDP paradigm.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning under Model Mismatch: We study reinforcement learning under model misspecification, where we do not have access to the true environment but only to a reasonably close approximation to it. We address this problem by extending the framework of robust MDPs to the model-free Reinforcement Learning setting, where we do not have access to the model parameters, but can only sample states from it. We define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy and approximate value function respectively. We scale up the robust algorithms to large MDPs via function approximation and prove convergence under two different settings. We prove convergence of robust approximate policy iteration and robust approximate value iteration for linear architectures (under mild assumptions). We also define a robust loss function, the mean squared robust projected Bellman error and give stochastic gradient descent algorithms that are guaranteed to converge to a local minimum.) <|cite_end|>. The most closely related papers, which follow a similar setting of robust IL, are <|cite_start|> (Reference: Imitation Learning from Imperfect Demonstration: Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly. To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations.
More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.) <|cite_end|> <|cite_start|> (Reference: Variational Imitation Learning with Diverse-quality Demonstrations: (19) Since $f_t(\phi,\omega) = F_t(\phi,\omega,\psi) = \max_{\psi} F_t(\phi,\omega,\psi)$, we have that $f(\phi,\omega) = \max_{\psi} F(\phi,\omega,\psi)$. A.2. Lower-bound $G$. Next, we derive the lower-bound $G$ of $g(\phi,\omega) = \log Z_{\phi,\omega}$. We first derive a trivial lower-bound using a “general” variational distribution over trajectories and discuss its issue. Then, we derive a lower-bound presented in the paper by using a structured variational distribution. Recall that the normalization term $Z_{\phi,\omega}$ of the model $p_{\phi,\omega}$ is given by $Z_{\phi,\omega} = K$) <|cite_end|> <|cite_start|> (Reference: Robust Imitation Learning from Noisy Demonstrations: Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first theoretically show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations: A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.) <|cite_end|> <|cite_start|> (Reference: Behavioral Cloning from Noisy Demonstrations: ) <|cite_end|>, where they consider imperfect or noisy observations in imitation learning. However, they do not provide theoretical guarantees for handling arbitrary outliers in the demonstrations, and to the best of our knowledge, we provide the \emph{first theoretical guarantee} robust to a constant fraction of arbitrary outliers in offline imitation learning.
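To make the median-of-means idea behind this guarantee concrete, the following is a minimal sketch of one gradient step on a MOM behavior-cloning objective, in the spirit of the RBC objective described in the contributions; the helper name \texttt{mom\_bc\_step}, the block count, the squared-error loss, and the per-step reshuffling are illustrative assumptions rather than the exact procedure of \Cref{alg:RBC}.
\begin{verbatim}
# A minimal sketch of one median-of-means (MOM) behavior-cloning
# update. Block count, loss, and reshuffling are illustrative
# assumptions, not the paper's exact Algorithm.
import torch

def mom_bc_step(policy, opt, states, actions, num_blocks=11):
    """Split the demonstration pairs into blocks, average the BC loss
    within each block, and take a gradient step on the block whose
    mean loss is the median. If fewer than half of the blocks contain
    a corrupted pair, the median of the block losses is bracketed by
    losses computed on clean blocks, which blunts the influence of
    arbitrary outliers."""
    perm = torch.randperm(states.shape[0])  # random partition
    block_losses = torch.stack([
        ((policy(states[idx]) - actions[idx]) ** 2).mean()
        for idx in perm.chunk(num_blocks)
    ])
    median_block = block_losses.argsort()[num_blocks // 2]
    loss = block_losses[median_block]  # gradient flows through one block
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
\end{verbatim}
A standard heuristic from the median-of-means literature is to choose the number of blocks larger than twice the expected number of corrupted pairs, so that a majority of blocks remain uncorrupted; the precise choice in RBC is deferred to \Cref{sec:algo}.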
Furthermore, <|cite_start|> (Reference: Imitation Learning from Imperfect Demonstration: Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly. To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations. More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.) <|cite_end|> <|cite_start|> (Reference: Variational Imitation Learning with Diverse-quality Demonstrations: (19) Since $f_t(\phi,\omega) = F_t(\phi,\omega,\psi) = \max_{\psi} F_t(\phi,\omega,\psi)$, we have that $f(\phi,\omega) = \max_{\psi} F(\phi,\omega,\psi)$. A.2. Lower-bound $G$. Next, we derive the lower-bound $G$ of $g(\phi,\omega) = \log Z_{\phi,\omega}$. We first derive a trivial lower-bound using a “general” variational distribution over trajectories and discuss its issue. Then, we derive a lower-bound presented in the paper by using a structured variational distribution. Recall that the normalization term $Z_{\phi,\omega}$ of the model $p_{\phi,\omega}$ is given by $Z_{\phi,\omega} = K$) <|cite_end|> <|cite_start|> (Reference: Robust Imitation Learning from Noisy Demonstrations: Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first theoretically show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.) <|cite_end|> require additional \emph{online interactions} with the environment, and <|cite_start|> (Reference: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations: A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.)
<|cite_end|> <|cite_start|> (Reference: Imitation Learning from Imperfect Demonstration: Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly. To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations. More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.) <|cite_end|>
[ "<|reference_start|> Generative Adversarial Imitation Learning: Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments. <|reference_end|>", "<|reference_start|> Improved Robustness and Safety for Autonomous Vehicle Control with Adversarial Reinforcement Learning: To improve efficiency and reduce failures in autonomous vehicles, research has focused on developing robust and safe learning methods that take into account disturbances in the environment. Existing literature in robust reinforcement learning poses the learning problem as a two player game between the autonomous system and disturbances. This paper examines two different algorithms to solve the game, Robust Adversarial Reinforcement Learning and Neural Fictitious Self Play, and compares performance on an autonomous driving scenario. We extend the game formulation to a semi-competitive setting and demonstrate that the resulting adversary better captures meaningful disturbances that lead to better overall performance. The resulting robust policy exhibits improved driving efficiency while effectively reducing collision rates compared to baseline control policies produced by traditional reinforcement learning methods. <|reference_end|>", "<|reference_start|> Toward the Fundamental Limits of Imitation Learning: Imitation learning (IL) aims to mimic the behavior of an expert policy in a sequential decision-making problem given only demonstrations. In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provided a dataset of $N$ expert trajectories ahead of time, and cannot interact with the MDP. Here, we show that the policy which mimics the expert whenever possible is in expectation $\\lesssim \\frac{|\\mathcal{S}| H^2 \\log (N)}{N}$ suboptimal compared to the value of the expert, even when the expert follows an arbitrary stochastic policy. Here $\\mathcal{S}$ is the state space, and $H$ is the length of the episode. Furthermore, we establish a suboptimality lower bound of $\\gtrsim |\\mathcal{S}| H^2 / N$ which applies even if the expert is constrained to be deterministic, or if the learner is allowed to actively query the expert at visited states while interacting with the MDP for $N$ episodes. To our knowledge, this is the first algorithm with suboptimality having no dependence on the number of actions, under no additional assumptions. We then propose a novel algorithm based on minimum-distance functionals in the setting where the transition model is given and the expert is deterministic. 
The algorithm is suboptimal by $\\lesssim \\min \\{ H \\sqrt{|\\mathcal{S}| / N} ,\\ |\\mathcal{S}| H^{3/2} / N \\}$, showing that knowledge of transition improves the minimax rate by at least a $\\sqrt{H}$ factor. <|reference_end|>", "<|reference_start|> Disagreement-regularized Imitation Learning: We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning. <|reference_end|>" ]
[ 5, 14, 18, 20 ]
{"<|cite_20|>": "arxiv-117603", "<|cite_1|>": "arxiv-246015", "<|cite_21|>": "arxiv-220059", "<|cite_2|>": "arxiv-180650", "<|cite_3|>": "ss-946865", "<|cite_4|>": "arxiv-99846", "<|multi_cite_5_1|>": "arxiv-189049", "<|multi_cite_5_2|>": "ss-1292541", "<|multi_cite_5_3|>": "arxiv-297636", "<|multi_cite_5_4|>": "arxiv-199732", "<|multi_cite_5_5|>": "ss-1184657", "<|multi_cite_7_1|>": "ss-683319", "<|multi_cite_7_2|>": "ss-1089863", "<|cite_8|>": "arxiv-220059", "<|cite_22|>": "arxiv-194642", "<|cite_10|>": "ss-1184657", "<|multi_cite_23_1|>": "ss-946865", "<|multi_cite_23_2|>": "arxiv-180650", "<|cite_24|>": "arxiv-289593", "<|cite_11|>": "arxiv-17086", "<|cite_12|>": "ss-720493", "<|multi_cite_25_1|>": "ss-1177445", "<|multi_cite_25_2|>": "arxiv-323624", "<|multi_cite_25_3|>": "ss-1206213", "<|cite_13|>": "arxiv-99846", "<|cite_14|>": "arxiv-206167", "<|cite_15|>": "arxiv-232731", "<|cite_26|>": "arxiv-263399", "<|multi_cite_27_1|>": "arxiv-289910", "<|multi_cite_27_2|>": "arxiv-312718", "<|multi_cite_27_3|>": "arxiv-329093", "<|multi_cite_28_1|>": "arxiv-209239", "<|multi_cite_28_2|>": "arxiv-255433", "<|cite_16|>": "arxiv-238756", "<|cite_17|>": "arxiv-320451", "<|cite_29|>": "arxiv-347783", "<|cite_18|>": "arxiv-347783", "<|cite_30|>": "arxiv-35910", "<|multi_cite_31_1|>": "ss-1277220", "<|multi_cite_31_2|>": "ss-1367987", "<|multi_cite_31_3|>": "arxiv-126825", "<|multi_cite_32_1|>": "arxiv-189049", "<|multi_cite_32_2|>": "ss-1292541", "<|multi_cite_32_3|>": "arxiv-297636", "<|multi_cite_32_4|>": "arxiv-199732", "<|multi_cite_32_5|>": "ss-1184657", "<|multi_cite_33_1|>": "arxiv-189049", "<|multi_cite_33_2|>": "ss-1292541", "<|multi_cite_33_3|>": "arxiv-297636", "<|multi_cite_19_1|>": "arxiv-199732", "<|multi_cite_19_2|>": "arxiv-189049"}
1405.7487
<|paper_start|> Title: Asynchronous Execution of the Fast Multipole Method Using Charm++ Abstract: Asynchronous Execution of the Fast Multipole Method Using Charm++: Fast multipole methods (FMM) on distributed memory have traditionally used a bulk-synchronous model of communicating the local essential tree (LET) and overlapping it with computation of the local data. This could be perceived as an extreme case of data aggregation, where the whole LET is communicated at once. Charm++ allows a much finer control over the granularity of communication, and has an asynchronous execution model that fits well with the structure of our FMM code. Unlike previous work on asynchronous fast N-body methods such as ChaNGa and PEPC, the present work performs a direct comparison between the traditional bulk-synchronous approach and the asynchronous approach using Charm++. Furthermore, the serial performance of our FMM code is over an order of magnitude better than these previous codes, so it is much more challenging to hide the overhead of Charm++. Introduction When applying data-driven execution models to parallel hierarchical N-body methods, it is important first to understand the significance of the dynamic load-balancing and data prefetching mechanisms that have existed in them for over two decades. Parallel N-body methods start by partitioning the particles in a way that maximizes data locality while balancing the workload among the partitions. This is done by using the workload from the previous time step as weights when splitting a space-filling curve that connects all particles (a schematic sketch of this weighted splitting is given below) <|cite_start|> (Reference: A parallel hashed oct-tree N-body algorithm: The authors report on an efficient adaptive N-body method which we have recently designed and implemented. The algorithm computes the forces on an arbitrary distribution of bodies in a time which scales as N log N with the particle number. The accuracy of the force calculations is analytically bounded, and can be adjusted via a user defined parameter between a few percent relative accuracy, down to machine arithmetic accuracy. Instead of using pointers to indicate the topology of the tree, the authors identify each possible cell with a key. The mapping of keys into memory locations is achieved via a hash table. This allows the program to access data in an efficient manner across multiple processors. Performance of the parallel program is measured on the 512 processor Intel Touchstone Delta system. Comments on a number of wide-ranging applications which can benefit from application of this type of algorithm are included.) <|cite_end|>. Parallel N-body methods also have a mechanism for prefetching the data on remote processes by communicating all necessary parts of the remote trees upfront. The resulting tree is a subset of the entire global tree, which is called the local essential tree (LET) <|cite_start|> (Reference: A parallel hashed oct-tree N-body algorithm: The authors report on an efficient adaptive N-body method which we have recently designed and implemented. The algorithm computes the forces on an arbitrary distribution of bodies in a time which scales as N log N with the particle number. The accuracy of the force calculations is analytically bounded, and can be adjusted via a user defined parameter between a few percent relative accuracy, down to machine arithmetic accuracy. Instead of using pointers to indicate the topology of the tree, the authors identify each possible cell with a key.
The mapping of keys into memory locations is achieved via a hash table. This allows the program to access data in an efficient manner across multiple processors. Performance of the parallel program is measured on the 512 processor Intel Touchstone Delta system. Comments on a number of wide-ranging applications which can benefit from application of this type of algorithm are included.) <|cite_end|>. Any data-driven execution model that provides features such as dynamic load-balancing and data prefetching/caching must augment these existing tailored mechanisms rather than compete with them. One area where the existing load-balancing and prefetching schemes can be improved is the granularity at which they are performed. Figure~\ref{fig:granularity_partition} shows the spectrum of granularity for the partitioning phase. Currently, the partitioning phase is constrained to the granularity of a single time step. One could coarsen the granularity by delaying the update of the partition for a few time steps, thereby adding more room for asynchronous execution. It is also possible that a repartitioning could take place within a time step in case of a node failure. Adding such flexibility to the partitioning granularity is a partial requirement for making the algorithm fault tolerant. Figure~\ref{fig:granularity_let} shows the spectrum of granularity for the LET communication (prefetching) phase. Conventional parallel N-body methods use a bulk-synchronous \texttt{MPI\_alltoallv} to communicate the whole LET at once, and overlap this communication with the local tree traversal to hide latency. One could over-decompose the LET down to a per-cell request, and then aggregate the communication to the optimal granularity. The bulk-synchronous communication model can be thought of as an extreme case of aggregation, while something like an RDMA per task per cell would be at the other end of the granularity spectrum. \begin{figure}[t] \centering \subfigure[Partitioning phase]{ \includegraphics[width=0.45\textwidth]{granularity_partition.pdf}\label{fig:granularity_partition}}\\ \subfigure[LET communication phase]{ \includegraphics[width=0.45\textwidth]{granularity_let.pdf}\label{fig:granularity_let}} \caption{FMM has two major communication phases -- the partitioning of particles, and the communication of the local essential tree (LET). The former performs dynamic load-balancing and the latter can be thought of as a prefetching or data caching mechanism. Data-flow execution models add value not by providing these features, but by adding flexibility to the granularity at which these phases can be executed asynchronously. \ref{fig:granularity_partition} shows the different granularities at which the partitioning phase can take place, while \ref{fig:granularity_let} shows the different granularities at which the LET communication can take place. The bulk-synchronous model can be viewed as an extreme case of communication aggregation.} \label{fig:granularity} \end{figure} There have already been a few attempts to use data-driven execution models with parallel hierarchical N-body methods. Jetley \textit{et al.} use the \texttt{Charm++} execution model for their cosmological N-body code \texttt{ChaNGa} <|cite_start|> (Reference: Massively parallel cosmological simulations with ChaNGa: Cosmological simulators are an important component in the study of the formation of galaxies and large scale structures, and can help answer many important questions about the Universe.
Despite their utility, existing parallel simulators do not scale effectively on modern machines containing thousands of processors. In this paper we present ChaNGa, a production simulator based on the CHARM++ infrastructure. To achieve scalable performance, ChaNGa employs various optimizations that maximize the overlap between computation and communication. We present experimental results of ChaNGa simulations on machines with thousands of processors, including the IBM Blue Gene/L and the Cray XT3. The paper goes on to highlight efforts toward even more efficient and scalable cosmological simulations. In particular, novel load balancing schemes that base decisions on certain characteristics of tree-based particle codes are discussed. Further, the multistepping capabilities of ChaNGa are presented, as are solutions to the load imbalance that such multiphase simulations face. We outline key requirements for an effective practical implementation and conclude by discussing preliminary results from simulations run with our multiphase load balancer.) <|cite_end|>. They compare several different cosmological datasets on several different architectures, and show significant improvement in the scalability over another cosmological N-body code \texttt{PKDGRAV}. They show that a na\"{i}ve load-balancing scheme based on work-stealing increases the amount of communication three-fold. \texttt{ChaNGa} has also been extended to run on GPUs <|cite_start|> (Reference: Scaling hierarchical {N}-body simulations on {GPU} clusters: This paper focuses on the use of GPGPU-based clusters for hierarchical N-body simulations. Whereas the behavior of these hierarchical methods has been studied in the past on CPU-based architectures, we investigate key performance issues in the context of clusters of GPUs. These include kernel organization and efficiency, the balance between tree traversal and force computation work, grain size selection through the tuning of offloaded work request sizes, and the reduction of sequential bottlenecks. The effects of various application parameters are studied and experiments done to quantify gains in performance. Our studies are carried out in the context of a production-quality parallel cosmological simulator called ChaNGa. We highlight the re-engineering of the application to make it more suitable for GPU-based environments. Finally, we present performance results from experiments on the NCSA Lincoln GPU cluster, including a note on GPU use in multistepped simulations.) <|cite_end|>. The tree construction and tree traversal are done on the CPU and only the force calculation is performed on the GPU. They report 3.82 Tflops (single precision) on 896 CPU cores + 256 S1070 GPUs, which is less than 2\% of the theoretical peak. They are able to calculate approximately 10 million particles per second on 448 CPU cores + 128 GPUs. However, state-of-the-art parallel N-body codes such as \texttt{pfalcON} and \texttt{ExaFMM} can calculate 10 million particles per second on a single CPU socket <|cite_start|> (Reference: Parallel Dual Tree Traversal on Multi-core and Many-core Architectures for Astrophysical N-body Simulations: ) <|cite_end|>. When assessing the usefulness of new data-driven runtime systems, it is problematic to use a code with orders of magnitude slower serial performance. 
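Returning to the partitioning mechanism described at the start of the introduction, the following is a schematic NumPy sketch of the weighted space-filling-curve splitting, under the assumption of precomputed Morton keys and per-particle work counts from the previous step; it is illustrative only and not taken from ExaFMM, ChaNGa, or any of the codes above.

\begin{verbatim}
# Schematic sketch (not from any of the codes discussed): weighted
# space-filling-curve partitioning. Particles are laid out along the
# curve by their Morton keys, and the curve is cut into num_ranks
# contiguous chunks of roughly equal total work, using last step's
# per-particle work counts as weights.
import numpy as np

def weighted_sfc_partition(morton_keys, work, num_ranks):
    order = np.argsort(morton_keys)   # order particles along the curve
    cum = np.cumsum(work[order])      # cumulative work along the curve
    chunk = np.minimum((cum * num_ranks / cum[-1]).astype(int),
                       num_ranks - 1) # equal-work bucket per particle
    ranks = np.empty(len(work), dtype=int)
    ranks[order] = chunk              # map back to the original ordering
    return ranks                      # ranks[i] = owning rank of particle i
\end{verbatim}

Calling such a routine every time step corresponds to the conventional granularity; delaying the call for several steps, as discussed above, trades load balance for more room to execute asynchronously.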
As mentioned earlier, data-driven execution models add value not by providing load-balancing or data-caching features to parallel N-body methods, but rather by adding flexibility to the granularity at which these mechanisms can be executed. However, slow serial performance of the code will skew the discussion of the optimal granularity. For example, techniques on the finer end of the spectrum in Figure~\ref{fig:granularity} will seem acceptable if the serial performance were slow enough, while in reality the communication latency could actually be too large for codes like \texttt{pfalcON} and \texttt{ExaFMM}. The same can be said of the work of Dekate \textit{et al.} <|cite_start|> (Reference: Improving the scalability of parallel N-body applications with an event-driven constraint-based execution model: The scalability and efficiency of graph applications are significantly constrained by conventional systems and their supporting programming models. Technology trends such as multicore, manycore, and heterogeneous system architectures are introducing further challenges and possibilities for emerging application domains such as graph applications. This paper explores the parallel execution of graphs that are generated using the Barnes–Hut algorithm to exemplify dynamic workloads. The workloads are expressed using the semantics of an exascale computing execution model called ParalleX. For comparison, results using conventional execution model semantics are also presented. We find improved load balancing during runtime and automatic parallelism discovery by using the advanced semantics for exascale computing.) <|cite_end|>, where they use the \texttt{ParalleX} execution model for the Barnes-Hut treecode and report a performance of 100K particles per second on a single CPU socket. This is exactly 100 times slower than the state-of-the-art N-body codes, which can do 10 million particles per second. \begin{figure}[t] \centering \subfigure[Hierarchical interaction using FMM]{ \includegraphics[width=0.25\textwidth]{fmm_interaction.pdf}\label{fig:interaction}}\\ \subfigure[Flow of FMM calculation]{ \includegraphics[width=0.5\textwidth]{fmm_flow.pdf}\label{fig:flow}} \caption{Illustration of the flow of FMM calculation, and the interaction between source and target particles.} \label{fig:fmm} \end{figure} There are a few other reports on the use of parallel N-body methods with data-driven execution models such as \texttt{StarPU} <|cite_start|> (Reference: Pipelining the Fast Multipole Method over a Runtime System: Fast Multipole Methods (FMM) are a fundamental operation for the simulation of many physical problems. The high performance design of such methods usually requires to carefully tune the algorithm for both the targeted physics and the hardware. In this paper, we propose a new approach that achieves high performance across architectures. Our method consists of expressing the FMM algorithm as a task flow and employing a state-of-the-art runtime system, StarPU, in order to process the tasks on the different processing units. We carefully design the task flow, the mathematical operators, their Central Processing Unit (CPU) and Graphics Processing Unit (GPU) implementations, as well as scheduling schemes. We compute potentials and forces of 200 million particles in 48.7 seconds on a homogeneous 160 cores SGI Altix UV 100 and of 38 million particles in 13.34 seconds on a heterogeneous 12 cores Intel Nehalem processor enhanced with 3 Nvidia M2090 Fermi GPUs.)
<|cite_end|> <|cite_start|> (Reference: Parallelization on heterogeneous multicore and multi-GPU systems of the fast multipole method for the Helmholtz equation using a runtime system: The Fast Multipole Method (FMM) is considered as one of the top ten algorithms of the 20th century. The FMM can speed up solving of electromagnetic scattering problems. With N being the number of unknowns, the complexity usually O(N 2) becomes O(N log N ) allowing a problem with hundreds of millions of complex unknowns to be solved. The FMM applied in our context has a serious drawback: the parallel version is not very scalable. In this paper, we present a new approach in order to overcome this limit. We use StarPU, a runtime system for heterogeneous multicore architectures. Thus, our aim is to have good efficiency on a cluster with hundreds of CPUs, and GPUs. Much work have been done on parallelization with advanced distribution techniques but never with such a runtime system. StarPU is very useful, especially for the multi-level algorithm on a hybrid machine. At present, we have developed a multi-core and a GPU version. The techniques for distributing and grouping the data are detailed in this paper. The first results of the strategy used are promising.) <|cite_end|> and \texttt{OmpSs} <|cite_start|> (Reference: Towards a Dataflow FMM using the OmpSs Programming Model: ) <|cite_end|>, but these only consider shared-memory architectures. Although there are qualitative similarities between inter-socket and inter-node data management, it is the quantitative difference that matters when discussing the granularity issues as mentioned before. The scope of the current work is on distributed memory data-driven execution models. Previous work with good serial performance has focused on optimizing the bulk-synchronous all-to-all communication itself rather than data-driven execution models. With these optimizations, Lashuk \textit{et al.} were able to calculate 90 billion particles in approximately 300 seconds on 200K cores of Jaguar and achieved 0.7 Pflops <|cite_start|> (Reference: A massively parallel adaptive fast multipole method on heterogeneous architectures: We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30× speedup over a single core CPU and 7× speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.) <|cite_end|>. Similarly, Yokota \textit{et al.} calculated 64 billion particles in approximately 100 seconds on 4000 GPUs of TSUBAME2.0 and achieved 1.0 Pflops <|cite_start|> (Reference: Petascale turbulence simulation using a highly parallel fast multipole method on GPUs: ) <|cite_end|>. The base of comparison for the data-driven execution models should be such highly optimized codes. The present work performs a direct comparison of a highly scalable bulk-synchronous N-body code, \texttt{ExaFMM}, with and without \texttt{Charm++}.
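The communication-granularity knob at stake in this comparison can be made concrete with a short, runtime-agnostic sketch; the class, the callback interface, and the fixed batch size are illustrative assumptions, not the actual Charm++ or ExaFMM implementation.

\begin{verbatim}
# Runtime-agnostic sketch of LET request aggregation. batch=1
# approximates a message per cell request, while a batch as large as
# the whole LET recovers the bulk-synchronous extreme; intermediate
# values expose the granularity knob discussed above.
from collections import defaultdict

class LetRequestAggregator:
    def __init__(self, batch, send):
        self.batch = batch              # aggregation granularity
        self.send = send                # callback: send(dest_rank, cell_keys)
        self.buffers = defaultdict(list)

    def request(self, dest_rank, cell_key):
        buf = self.buffers[dest_rank]
        buf.append(cell_key)
        if len(buf) >= self.batch:      # flush a full batch eagerly
            self.send(dest_rank, buf)
            self.buffers[dest_rank] = []

    def flush(self):                    # drain partial batches at phase end
        for dest, buf in self.buffers.items():
            if buf:
                self.send(dest, buf)
        self.buffers.clear()
\end{verbatim}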
Unlike studies where the comparison is made against a completely different code, the present work compares the same code with and without the data-driven execution model. <|paper_end|>
[ "<|reference_start|> Massively parallel cosmological simulations with ChaNGa: Cosmological simulators are an important component in the study of the formation of galaxies and large scale structures, and can help answer many important questions about the Universe. Despite their utility, existing parallel simulators do not scale effectively on modern machines containing thousands of processors. In this paper we present ChaNGa, a production simulator based on the CHARM++ infrastructure. To achieve scalable performance, ChaNGa employs various optimizations that maximize the overlap between computation and communication. We present experimental results of ChaNGa simulations on machines with thousands of processors, including the IBM Blue Gene/L and the Cray XT3. The paper goes on to highlight efforts toward even more efficient and scalable cosmological simulations. In particular, novel load balancing schemes that base decisions on certain characteristics of tree-based particle codes are discussed. Further, the multistepping capabilities of ChaNGa are presented, as are solutions to the load imbalance that such multiphase simulations face. We outline key requirements for an effective practical implementation and conclude by discussing preliminary results from simulations run with our multiphase load balancer. <|reference_end|>", "<|reference_start|> Parallel Dual Tree Traversal on Multi-core and Many-core Architectures for Astrophysical N-body Simulations: <|reference_end|>", "<|reference_start|> Improving the scalability of parallel N-body applications with an event-driven constraint-based execution model: The scalability and efficiency of graph applications are significantly constrained by conventional systems and their supporting programming models. Technology trends such as multicore, manycore, and heterogeneous system architectures are introducing further challenges and possibilities for emerging application domains such as graph applications. This paper explores the parallel execution of graphs that are generated using the Barnes–Hut algorithm to exemplify dynamic workloads. The workloads are expressed using the semantics of an exascale computing execution model called ParalleX. For comparison, results using conventional execution model semantics are also presented. We find improved load balancing during runtime and automatic parallelism discovery by using the advanced semantics for exascale computing. <|reference_end|>", "<|reference_start|> A massively parallel adaptive fast multipole method on heterogeneous architectures: We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30× speedup over a single core CPU and 7× speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs. <|reference_end|>" ]
[ 2, 4, 5, 9 ]
{"<|cite_1|>": "ss-1150683", "<|cite_2|>": "ss-1150683", "<|cite_3|>": "ss-1150684", "<|cite_4|>": "ss-1148740", "<|cite_5|>": "ss-1150685", "<|cite_6|>": "ss-1150686", "<|multi_cite_8_1|>": "arxiv-32291", "<|multi_cite_8_2|>": "ss-1150687", "<|cite_9|>": "ss-793829", "<|cite_10|>": "ss-1910245", "<|cite_11|>": "ss-1715774"}
2209.03927
<|paper_start|> Title: Sequential Information Design: Learning to Persuade in the Dark Abstract: Sequential Information Design: Learning to Persuade in the Dark: We study a repeated information design problem faced by an informed sender who tries to influence the behavior of a self-interested receiver. We consider settings where the receiver faces a sequential decision making (SDM) problem. At each round, the sender observes the realizations of random events in the SDM problem. This begets the challenge of how to incrementally disclose such information to the receiver to persuade them to follow (desirable) action recommendations. We study the case in which the sender does not know the probabilities of random events, and, thus, they have to gradually learn them while persuading the receiver. We start by providing a non-trivial polytopal approximation of the set of sender's persuasive information structures. This is crucial to design efficient learning algorithms. Next, we prove a negative result: no learning algorithm can be persuasive. Thus, we relax persuasiveness requirements by focusing on algorithms that guarantee that the receiver's regret in following recommendations grows sub-linearly. In the full-feedback setting -- where the sender observes the realizations of all random events --, we provide an algorithm with $\tilde{O}(\sqrt{T})$ regret for both the sender and the receiver. In contrast, in the bandit-feedback setting -- where the sender only observes the realizations of random events actually occurring in the SDM problem --, we design an algorithm that, given an $\alpha \in [1/2, 1]$ as input, ensures $\tilde{O}(T^\alpha)$ and $\tilde{O}(T^{\max \{ \alpha, 1-\frac{\alpha}{2} \}})$ regrets for the sender and the receiver, respectively. This result is complemented by a lower bound showing that such a regret trade-off is essentially tight. Introduction Bayesian persuasion <|cite_start|> (Reference: Bayesian persuasion: When is it possible for one person to persuade another to change her action? We take a mechanism design approach to this question. Taking preferences and initial beliefs as given, we introduce the notion of a persuasion mechanism: a game between Sender and Receiver defined by an information structure and a message technology. We derive necessary and sufficient conditions for the existence of a persuasion mechanism that strictly benefits Sender. We characterize the optimal mechanism. Finally, we analyze several examples that illustrate the applicability of our results.) <|cite_end|> (a.k.a.~\emph{information design}) is the problem faced by an informed {\em sender} who wants to influence the behavior of a self-interested {\em receiver} via the provision of payoff-relevant information. This captures the problem of ``who gets to know what'', which is fundamental in all economic interactions. Thus, Bayesian persuasion is ubiquitous in real-world problems, such as online advertising <|cite_start|> (Reference: Send mixed signals: Earn more, work less: Emek et al presented a model of probabilistic single-item second price auctions where an auctioneer who is informed about the type of an item for sale, broadcasts a signal about this type to uninformed bidders. They proved that finding the optimal (for the purpose of generating revenue) pure signaling scheme is strongly NP-hard. In contrast, we prove that finding the optimal mixed signaling scheme can be done in polynomial time using linear programming.
For the proof, we show that the problem is strongly related to a problem of optimally bundling divisible goods for auctioning. We also prove that a mixed signaling scheme can in some cases generate twice as much revenue as the best pure signaling scheme and we prove a generally applicable lower bound on the revenue generated by the best mixed signaling scheme.) <|cite_end|>, voting <|cite_start|> (Reference: Persuading voters: In a symmetric information voting model, an individual (politician) can influence voters’ choices by strategically designing a policy experiment (public signal). We characterize the politician’s optimal experiment. With a non-unanimous voting rule, she exploits voters’ heterogeneity by designing an experiment with realizations targeting different winning coalitions. Consequently, under a simple-majority rule, a majority of voters might be strictly worse off due to the politician’s influence. We characterize voters’ preferences over electoral rules and provide conditions for a majority of voters to prefer a supermajority (or unanimity) voting rule, in order to induce the politician to supply a more informative experiment.) <|cite_end|> <|cite_start|> (Reference: Persuading Voters in District-based Elections: We focus on the scenario in which an agent can exploit his information advantage to manipulate the outcome of an election. In particular, we study district-based elections with two candidates, in which the winner of the election is the candidate that wins in the majority of the districts. District-based elections are adopted worldwide (e.g., UK and USA) and are a natural extension of widely studied voting mechanisms (e.g., k-voting and plurality voting). We resort to the Bayesian persuasion framework, where the manipulator (sender) strategically discloses information to the voters (receivers) that update their beliefs rationally. We study both private signaling, in which the sender can use a private communication channel per receiver, and public signaling, in which the sender can use a single communication channel for all the receivers. Furthermore, for the first time, we introduce semi-public signaling in which the sender can use a single communication channel per district. We show that there is a sharp distinction between private and (semi-)public signaling. In particular, optimal private signaling schemes can provide an arbitrarily better probability of victory than (semi-)public ones and can be computed efficiently, while optimal (semi-)public signaling schemes cannot be approximated to within any factor in polynomial time unless P=NP. However, we show that reasonable relaxations allow the design of multi-criteria PTASs for optimal (semi-)public signaling schemes. In doing so, we introduce a novel property, namely comparative stability, and we design a bi-criteria PTAS for public signaling in general Bayesian persuasion problems beyond elections when the sender's utility function is state-dependent.) <|cite_end|> <|cite_start|> (Reference: Persuading Voters: It's Easy to Whisper, It's Hard to Speak Loud: We focus on the following natural question: is it possible to influence the outcome of a voting process through the strategic provision of information to voters who update their beliefs rationally? We investigate whether it is computationally tractable to design a signaling scheme maximizing the probability with which the sender's preferred candidate is elected. 
We focus on the model recently introduced by Arieli and Babichenko (2019) (i.e., without inter-agent externalities), and consider, as explanatory examples, $k$-voting rule and plurality voting. There is a sharp contrast between the case in which private signals are allowed and the more restrictive setting in which only public signals are allowed. In the former, we show that an optimal signaling scheme can be computed efficiently both under a $k$-voting rule and plurality voting. In establishing these results, we provide two general (i.e., applicable to settings beyond voting) contributions. Specifically, we extend a well known result by Dughmi and Xu (2017) to more general settings, and prove that, when the sender's utility function is anonymous, computing an optimal signaling scheme is fixed parameter tractable w.r.t. the number of receivers' actions. In the public signaling case, we show that the sender's optimal expected return cannot be approximated to within any factor under a $k$-voting rule. This negative result easily extends to plurality voting and problems where utility functions are anonymous.) <|cite_end|>, traffic routing <|cite_start|> (Reference: Hardness Results for Signaling in Bayesian Zero-Sum and Network Routing Games: We study the optimization problem faced by a perfectly informed principal in a Bayesian game, who reveals information to the players about the state of nature to obtain a desirable equilibrium. This signaling problem is the natural design question motivated by uncertainty in games and has attracted much recent attention. We present new hardness results for signaling problems in (a) Bayesian two-player zero-sum games, and (b) Bayesian network routing games. For Bayesian zero-sum games, when the principal seeks to maximize the equilibrium utility of a player, we show that it is NP-hard to obtain an additive FPTAS. Our hardness proof exploits duality and the equivalence of separation and optimization in a novel way. Further, we rule out an additive PTAS assuming planted clique hardness, which states that no polynomial time algorithm can recover a planted clique from an Erd\H{o}s-R\'enyi random graph. Complementing these, we obtain a PTAS for a structured class of zero-sum games (where obtaining an FPTAS is still NP-hard) when the payoff matrices obey a Lipschitz condition. Previous results ruled out an FPTAS assuming planted-clique hardness, and a PTAS only for implicit games with quasi-polynomial-size strategy sets. For Bayesian network routing games, wherein the principal seeks to minimize the average latency of the Nash flow, we show that it is NP-hard to obtain a (multiplicative) $(4/3 - \epsilon)$-approximation, even for linear latency functions. This is the optimal inapproximability result for linear latencies, since we show that full revelation achieves a $(4/3)$-approximation for linear latencies.) <|cite_end|> <|cite_start|> (Reference: Public Signaling in Bayesian Ad Auctions: We study signaling in Bayesian ad auctions, in which bidders' valuations depend on a random, unknown state of nature. The auction mechanism has complete knowledge of the actual state of nature, and it can send signals to bidders so as to disclose information about the state and increase revenue. For instance, a state may collectively encode some features of the user that are known to the mechanism only, since the latter has access to data sources unaccessible to the bidders. 
We study the problem of computing how the mechanism should send signals to bidders in order to maximize revenue. While this problem has already been addressed in the easier setting of second-price auctions, to the best of our knowledge, our work is the first to explore ad auctions with more than one slot. In this paper, we focus on public signaling and VCG mechanisms, under which bidders truthfully report their valuations. We start with a negative result, showing that, in general, the problem does not admit a PTAS unless P = NP, even when bidders' valuations are known to the mechanism. The rest of the paper is devoted to settings in which such negative result can be circumvented. First, we prove that, with known valuations, the problem can indeed be solved in polynomial time when either the number of states d or the number of slots m is fixed. Moreover, in the same setting, we provide an FPTAS for the case in which bidders are single minded, but d and m can be arbitrary. Then, we switch to the random valuations setting, in which these are randomly drawn according to some probability distribution. In this case, we show that the problem admits an FPTAS, a PTAS, and a QPTAS, when, respectively, d is fixed, m is fixed, and bidders' valuations are bounded away from zero.) <|cite_end|>, security <|cite_start|> (Reference: Information Disclosure as a Means to Security: In this paper we present a novel Stackelberg-type model of security domains: Security Assets aSsignment with Information disclosure (SASI). The model combines both the features of the Stackelberg Security Games (SSGs) model and of the Bayesian Persuasion (BP) model. More specifically, SASI includes: a) an uncontrolled, exogenous security state that serves as the Defender's private information; b) multiple security assets with non-accumulating, targetlocal defence capability; c) a pro-active, verifiable and public, unidirectional information disclosure channel from the Defender to the Attacker. We show that SASI with a non-degenerate information disclosure can be arbitrarily more efficient, than a "silent" Stackelberg assets allocation. We also provide a linear program reformulation of SASI that can be solved in polynomial time in SASI parameters. Furthermore, we show that it is possible to remove one of SASI parameters and, rather than require it as an input, recover it by computation. As a result, SASI becomes highly scalable.) <|cite_end|> <|cite_start|> (Reference: Public Signaling in Bayesian Ad Auctions: We study signaling in Bayesian ad auctions, in which bidders' valuations depend on a random, unknown state of nature. The auction mechanism has complete knowledge of the actual state of nature, and it can send signals to bidders so as to disclose information about the state and increase revenue. For instance, a state may collectively encode some features of the user that are known to the mechanism only, since the latter has access to data sources unaccessible to the bidders. We study the problem of computing how the mechanism should send signals to bidders in order to maximize revenue. While this problem has already been addressed in the easier setting of second-price auctions, to the best of our knowledge, our work is the first to explore ad auctions with more than one slot. In this paper, we focus on public signaling and VCG mechanisms, under which bidders truthfully report their valuations. 
We start with a negative result, showing that, in general, the problem does not admit a PTAS unless P = NP, even when bidders' valuations are known to the mechanism. The rest of the paper is devoted to settings in which such negative result can be circumvented. First, we prove that, with known valuations, the problem can indeed be solved in polynomial time when either the number of states d or the number of slots m is fixed. Moreover, in the same setting, we provide an FPTAS for the case in which bidders are single minded, but d and m can be arbitrary. Then, we switch to the random valuations setting, in which these are randomly drawn according to some probability distribution. In this case, we show that the problem admits an FPTAS, a PTAS, and a QPTAS, when, respectively, d is fixed, m is fixed, and bidders' valuations are bounded away from zero.) <|cite_end|>, auctions <|cite_start|> (Reference: Signaling Schemes for Revenue Maximization: Signaling is an important topic in the study of asymmetric information in economic settings. In particular, the transparency of information available to a seller in an auction setting is a question of major interest. We introduce the study of signaling when conducting a second price auction of a probabilistic good whose actual instantiation is known to the auctioneer but not to the bidders. This framework can be used to model impressions selling in display advertising. We study the problem of computing a signaling scheme that maximizes the auctioneer's revenue in a Bayesian setting. While the general case is proved to be computationally hard, several cases of interest are shown to be polynomially solvable. In addition, we establish a tight bound on the minimum number of signals required to implement an optimal signaling scheme and show that at least half of the maximum social welfare can be preserved within such a scheme.) <|cite_end|> <|cite_start|> (Reference: Targeting and Signaling in Ad Auctions: Modern ad auctions allow advertisers to target more specific segments of the user population. Unfortunately, this is not always in the best interest of the ad platform. In this paper, we examine the following basic question in the context of second-price ad auctions: how should an ad platform optimally reveal information about the ad opportunity to the advertisers in order to maximize revenue? We consider a model in which bidders' valuations depend on a random state of the ad opportunity. Different from previous work, we focus on a more practical, and challenging, situation where the space of possible realizations of ad opportunities is extremely large. We thus focus on developing algorithms whose running time is independent of the number of ad opportunity realizations. We examine the auctioneer's algorithmic question of designing the optimal signaling scheme. When the auctioneer is restricted to send a public signal to all bidders, we focus on a well-motivated Bayesian valuation setting in which the auctioneer and bidders both have private information, and present two main results: 1. we exhibit a characterization result regarding approximately optimal schemes and prove that any constant-approximate public signaling scheme must use exponentially many signals; 2. we present a "simple" public signaling scheme that serves as a constant approximation under mild assumptions. We then initiate an exploration on the power of being able to send different signals privately to different bidders. 
Here we examine a basic setting where the auctioneer knows bidders' valuations, and exhibit a polynomial-time private scheme that extracts almost full surplus even in the worst Bayes Nash equilibrium. This illustrates the surprising power of private signaling schemes in extracting revenue.) <|cite_end|> <|cite_start|> (Reference: Public Signaling in Bayesian Ad Auctions: We study signaling in Bayesian ad auctions, in which bidders' valuations depend on a random, unknown state of nature. The auction mechanism has complete knowledge of the actual state of nature, and it can send signals to bidders so as to disclose information about the state and increase revenue. For instance, a state may collectively encode some features of the user that are known to the mechanism only, since the latter has access to data sources unaccessible to the bidders. We study the problem of computing how the mechanism should send signals to bidders in order to maximize revenue. While this problem has already been addressed in the easier setting of second-price auctions, to the best of our knowledge, our work is the first to explore ad auctions with more than one slot. In this paper, we focus on public signaling and VCG mechanisms, under which bidders truthfully report their valuations. We start with a negative result, showing that, in general, the problem does not admit a PTAS unless P = NP, even when bidders' valuations are known to the mechanism. The rest of the paper is devoted to settings in which such negative result can be circumvented. First, we prove that, with known valuations, the problem can indeed be solved in polynomial time when either the number of states d or the number of slots m is fixed. Moreover, in the same setting, we provide an FPTAS for the case in which bidders are single minded, but d and m can be arbitrary. Then, we switch to the random valuations setting, in which these are randomly drawn according to some probability distribution. In this case, we show that the problem admits an FPTAS, a PTAS, and a QPTAS, when, respectively, d is fixed, m is fixed, and bidders' valuations are bounded away from zero.) <|cite_end|> <|cite_start|> (Reference: Signaling in Posted Price Auctions: We study single-item single-unit Bayesian posted price auctions, where buyers arrive sequentially and their valuations for the item being sold depend on a random, unknown state of nature. The seller has complete knowledge of the actual state and can send signals to the buyers so as to disclose information about it. For instance, the state of nature may reflect the condition and/or some particular features of the item, which are known to the seller only. The problem faced by the seller is about how to partially disclose information about the state so as to maximize revenue. Unlike classical signaling problems, in this setting, the seller must also correlate the signals being sent to the buyers with some price proposals for them. This introduces additional challenges compared to standard settings. We consider two cases: the one where the seller can only send signals publicly visible to all buyers, and the case in which the seller can privately send a different signal to each buyer. As a first step, we prove that, in both settings, the problem of maximizing the seller's revenue does not admit an FPTAS unless P=NP, even for basic instances with a single buyer. As a result, in the rest of the paper, we focus on designing PTASs. 
In order to do so, we first introduce a unifying framework encompassing both public and private signaling, whose core result is a decomposition lemma that allows focusing on a finite set of possible buyers' posteriors. This forms the basis on which our PTASs are developed. In particular, in the public signaling setting, our PTAS employs some ad hoc techniques based on linear programming, while our PTAS for the private setting relies on the ellipsoid method to solve an exponentially-sized LP in polynomial time. In the latter case, we need a custom approximate separation oracle, which we implement with a dynamic programming approach.) <|cite_end|>, and marketing <|cite_start|> (Reference: Algorithmic Aspects of Private Bayesian Persuasion: We consider a multi-receivers Bayesian persuasion model where an informed sender tries to persuade a group of receivers to take a certain action. The state of nature is known to the sender, but it is unknown to the receivers. The sender is allowed to commit to a signaling policy where she sends a private signal to every receiver. This work studies the computation aspects of finding a signaling policy that maximizes the sender's revenue. We show that if the sender's utility is a submodular function of the set of receivers that take the desired action, then we can efficiently find a signaling policy whose revenue is at least (1-1/e) times the optimal. We also prove that approximating the sender's optimal revenue by a factor better than (1-1/e) is NP-hard and, hence, the developed approximation guarantee is essentially tight. When the sender's utility is a function of the number of receivers that take the desired action (i.e., the utility function is anonymous), we show that an optimal signaling policy can be computed in polynomial time. Our results are based on an interesting connection between the Bayesian persuasion problem and the evaluation of the concave closure of a set function.) <|cite_end|> <|cite_start|> (Reference: Persuasion in networks: public signals and k-cores: We consider a setting where agents in a social network take binary actions, which exhibit local strategic complementarities. In particular, the payoff of each agent depends on the number of her neighbors who take action 1, as well as an underlying state of the world. The agents are a priori uninformed about the state, which belongs to an interval of the real line. An information designer (sender) can commit to a public signaling mechanism, which once the state is realized reveals a public signal to all the agents. Agents update their posterior about the state using the realization of the public signal, and possibly change their actions. The objective of the information designer is to maximize the expected activity level, i.e., the expected total number of agents who take action 1. How should the information designer choose her public signaling mechanism to achieve this objective? This is the first paper to study the design of public signaling mechanisms in social networks, and its main contribution is to provide an answer to this question.) <|cite_end|>. We study Bayesian persuasion in settings where the receiver plays in a \emph{sequential decision making} (SDM) problem. An SDM problem is characterized by a tree structure made of \emph{decision} nodes, where the receiver takes actions, and \emph{chance} nodes, in which \emph{partially observable} random events occur.
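Purely as an illustration of this tree structure, the following encoding and toy numbers are our own assumptions, not part of the paper's formalism.

\begin{verbatim}
# Illustrative encoding of an SDM tree: decision nodes hold the
# receiver's actions, chance nodes hold outcome probabilities (the
# prior), and the sender observes chance realizations that are only
# partially observable to the receiver.
from dataclasses import dataclass

@dataclass
class Leaf:
    sender_utility: float
    receiver_utility: float

@dataclass
class ChanceNode:
    outcomes: list  # list of (probability, child) pairs; part of the prior

@dataclass
class DecisionNode:
    actions: dict   # action label -> child node

# A depth-two toy instance: the receiver picks an action, then a
# partially observable random event resolves.
toy_sdm = DecisionNode(actions={
    "a1": ChanceNode(outcomes=[(0.3, Leaf(1.0, 0.2)),
                               (0.7, Leaf(0.0, 0.8))]),
    "a2": Leaf(sender_utility=0.5, receiver_utility=0.5),
})
\end{verbatim}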
The sender perfectly observes the realizations of random events, and their goal is to incrementally disclose the acquired information to induce the receiver towards desirable outcomes. In order to do so, the sender commits to a \emph{signaling scheme} specifying a probability distribution over action recommendations for the receiver at each decision node. Specifically, the sender commits to a \emph{persuasive} signaling scheme, meaning that the receiver is incentivized to follow recommendations. We consider the case of a \emph{farsighted} receiver, meaning that they take into account all the possible future events when deciding whether or \emph{not} to deviate from recommendations at each decision node. With some notable exceptions (see, \emph{e.g.}, <|cite_start|> (Reference: Learning to Persuade on the Fly: Robustness Against Ignorance: Motivated by information sharing in online platforms, we study repeated persuasion between a sender and a stream of receivers where at each time, the sender observes a payoff-relevant state drawn independently and identically from an unknown distribution, and shares state information with the receivers who each choose an action. The sender seeks to persuade the receivers into taking actions aligned with the sender's preference by selectively sharing state information. However, in contrast to the standard models, neither the sender nor the receivers know the distribution, and the sender has to persuade while learning the distribution on the fly. We study the sender's learning problem of making persuasive action recommendations to achieve low regret against the optimal persuasion mechanism with the knowledge of the distribution. To do this, we first propose and motivate a persuasiveness criterion for the unknown distribution setting that centers robustness as a requirement in the face of uncertainty. Our main result is an algorithm that, with high probability, is robustly-persuasive and achieves $O(\sqrt{T\log T})$ regret, where $T$ is the horizon length. Intuitively, at each time our algorithm maintains a set of candidate distributions, and chooses a signaling mechanism that is simultaneously persuasive for all of them. Core to our proof is a tight analysis about the cost of robust persuasion, which may be of independent interest. We further prove that this regret order is optimal (up to logarithmic terms) by showing that no algorithm can achieve regret better than $\Omega(\sqrt{T})$.) <|cite_end|>), Bayesian persuasion models in the literature make the stringent assumption that both the sender and the receiver know the \emph{prior}, which, in our setting, is defined by the probabilities of random events in the SDM problem. We relax such an assumption by considering an online learning framework in which the sender, without any knowledge of the prior, repeatedly interacts with the receiver to gradually learn the prior while still being persuasive. \paragraph{Original contributions.} Our goal is to design online learning algorithms that are no-regret for the sender, while being persuasive for the receiver. We start by providing a non-trivial polytopal approximation of the set of sender's persuasive signaling schemes. This will be crucial in designing efficient (\emph{i.e.}, polynomial-time) learning algorithms, and it also shows how a sender-optimal signaling scheme can be found in polynomial time in the offline version of our problem, which may be of independent interest.
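For intuition on why persuasive signaling schemes form a polytope amenable to linear programming, here is a sketch of the classic one-shot persuasion LP -- a much simpler special case than the sequential setting above; the function names and the scipy-based formulation are our own illustrative choices, not the paper's construction.

\begin{verbatim}
# One-shot persuasion LP sketch (a simplification, not the paper's
# polytopal approximation). Variables x[s, a] encode the joint
# probability of state s and recommendation a; obedience constraints
# make the scheme persuasive, and the objective is the sender's
# expected utility.
import numpy as np
from scipy.optimize import linprog

def optimal_one_shot_scheme(prior, u_sender, u_receiver):
    S, A = u_sender.shape
    idx = lambda s, a: s * A + a
    A_ub, b_ub = [], []
    for a in range(A):                  # recommended action
        for b in range(A):              # candidate deviation
            if a == b:
                continue
            row = np.zeros(S * A)
            for s in range(S):          # obedience: following a beats b
                row[idx(s, a)] = -(u_receiver[s, a] - u_receiver[s, b])
            A_ub.append(row)
            b_ub.append(0.0)
    A_eq = np.zeros((S, S * A))         # consistency with the prior
    for s in range(S):
        A_eq[s, s * A:(s + 1) * A] = 1.0
    res = linprog(-u_sender.reshape(-1), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), A_eq=A_eq, b_eq=prior,
                  bounds=[(0.0, None)] * (S * A))
    return res.x.reshape(S, A), -res.fun
\end{verbatim}

In the sequential problem, the analogous constraints must account for every decision node and every continuation plan of a farsighted receiver, which is what makes the polytopal approximation above non-trivial.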
Next, we prove a negative result: without knowing the prior, no algorithm can be persuasive at each round with high probability. Thus, we relax persuasiveness requirements by focusing on learning algorithms that guarantee that the receiver's regret in following recommendations grows sub-linearly, while guaranteeing the same for the sender's regret. First, we study the \emph{full-feedback} case, where the sender observes the realizations of \emph{all} the random events that may potentially happen in the SDM problem. In such a setting, we provide an algorithm with $\tilde{O}(\sqrt{T})$ regret for both the sender and the receiver. Then, we focus on the \emph{bandit-feedback} setting, where the sender only observes the realizations of random events on the path in the tree traversed during the SDM problem. In this case, we design an algorithm that achieves $\tilde{O}(T^\alpha)$ sender's regret and $\tilde{O}(T^{\max \{ \alpha, 1-\frac{\alpha}{2} \}})$ receiver's regret, for any $\alpha \in [1/2, 1]$ given as input (the two bounds are balanced at $\alpha = 2/3$, where both regrets become $\tilde{O}(T^{2/3})$). The crucial component of the algorithm is a non-trivial exploration phase that uniformly explores the tree defining the SDM problem to build suitable estimators of the prior. This is needed since, with bandit feedback, playing a signaling scheme may provide insufficient information about its persuasiveness. Finally, we provide a lower bound showing that the regret trade-off achieved by our algorithm is tight for $\alpha \in [1/2,2/3]$. \paragraph{Related works.} Some works have addressed Bayesian persuasion in \emph{Markov decision processes} (MDPs). <|cite_start|> (Reference: Bayesian Persuasion in Sequential Decision-Making: We study a dynamic model of Bayesian persuasion in sequential decision-making settings. An informed principal observes an external parameter of the world and advises an uninformed agent about actions to take over time. The agent takes actions in each time step based on the current state, the principal's advice/signal, and beliefs about the external parameter. The action of the agent updates the state according to a stochastic process. The model arises naturally in many applications, e.g., an app (the principal) can advice the user (the agent) on possible choices between actions based on additional real-time information the app has. We study the problem of designing a signaling strategy from the principal's point of view. We show that the principal has an optimal strategy against a myopic agent, who only optimizes their rewards locally, and the optimal strategy can be computed in polynomial time. In contrast, it is NP-hard to approximate an optimal policy against a far-sighted agent. Further, if the principal has the power to threaten the agent by not providing future signals, then we can efficiently compute a threat-based strategy. This strategy guarantees the principal's payoff as if playing against an agent who is far-sighted but myopic to future signals.) <|cite_end|>~and <|cite_start|> (Reference: Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning: In today's economy, it becomes important for Internet platforms to consider the sequential information design problem to align its long term interest with incentives of the gig service providers.
This paper proposes a novel model of sequential information design, namely the Markov persuasion processes (MPPs), where a sender, with informational advantage, seeks to persuade a stream of myopic receivers to take actions that maximizes the sender's cumulative utilities in a finite horizon Markovian environment with varying prior and utility functions. Planning in MPPs thus faces the unique challenge in finding a signaling policy that is simultaneously persuasive to the myopic receivers and inducing the optimal long-term cumulative utilities of the sender. Nevertheless, in the population level where the model is known, it turns out that we can efficiently determine the optimal (resp. $\epsilon$-optimal) policy with finite (resp. infinite) states and outcomes, through a modified formulation of the Bellman equation. Our main technical contribution is to study the MPP under the online reinforcement learning (RL) setting, where the goal is to learn the optimal signaling policy by interacting with with the underlying MPP, without the knowledge of the sender's utility functions, prior distributions, and the Markov transition kernels. We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles. Our algorithm enjoys sample efficiency by achieving a sublinear $\sqrt{T}$-regret upper bound. Furthermore, both our algorithm and theory can be applied to MPPs with large space of outcomes and states via function approximation, and we showcase such a success under the linear setting.) <|cite_end|> show how to efficiently find a sender-optimal policy when the receiver is \emph{myopic} (\emph{i.e.}, it only optimizes one-step rewards) in MDPs with infinite and finite horizon, respectively. Moreover, the former assume that the environment is known, while the latter do \emph{not}. These works considerably differ from ours, since we assume a farsighted receiver and also model partial observability of random events.\footnote{ <|cite_start|> (Reference: Bayesian Persuasion in Sequential Decision-Making: We study a dynamic model of Bayesian persuasion in sequential decision-making settings. An informed principal observes an external parameter of the world and advises an uninformed agent about actions to take over time. The agent takes actions in each time step based on the current state, the principal's advice/signal, and beliefs about the external parameter. The action of the agent updates the state according to a stochastic process. The model arises naturally in many applications, e.g., an app (the principal) can advice the user (the agent) on possible choices between actions based on additional real-time information the app has. We study the problem of designing a signaling strategy from the principal's point of view. We show that the principal has an optimal strategy against a myopic agent, who only optimizes their rewards locally, and the optimal strategy can be computed in polynomial time. In contrast, it is NP-hard to approximate an optimal policy against a far-sighted agent. Further, if the principal has the power to threaten the agent by not providing future signals, then we can efficiently compute a threat-based strategy. This strategy guarantees the principal's payoff as if playing against an agent who is far-sighted but myopic to future signals.) 
<|cite_end|>~also study a model with farsighted receiver, where they show that the problem of finding a sender-optimal policy is \textsf{NP}-hard. Thus, they do \emph{not} provide any algorithmic result for such a model.} Another work close to ours is <|cite_start|> (Reference: Learning to Persuade on the Fly: Robustness Against Ignorance: Motivated by information sharing in online platforms, we study repeated persuasion between a sender and a stream of receivers where at each time, the sender observes a payoff-relevant state drawn independently and identically from an unknown distribution, and shares state information with the receivers who each choose an action. The sender seeks to persuade the receivers into taking actions aligned with the sender's preference by selectively sharing state information. However, in contrast to the standard models, neither the sender nor the receivers know the distribution, and the sender has to persuade while learning the distribution on the fly. We study the sender's learning problem of making persuasive action recommendations to achieve low regret against the optimal persuasion mechanism with the knowledge of the distribution. To do this, we first propose and motivate a persuasiveness criterion for the unknown distribution setting that centers robustness as a requirement in the face of uncertainty. Our main result is an algorithm that, with high probability, is robustly-persuasive and achieves $O(\sqrt{T\log T})$ regret, where $T$ is the horizon length. Intuitively, at each time our algorithm maintains a set of candidate distributions, and chooses a signaling mechanism that is simultaneously persuasive for all of them. Core to our proof is a tight analysis about the cost of robust persuasion, which may be of independent interest. We further prove that this regret order is optimal (up to logarithmic terms) by showing that no algorithm can achieve regret better than $\Omega(\sqrt{T})$.) <|cite_end|>, which studies a (non-sequential) persuasion problem in which the sender and the receiver do \emph{not} know the prior and interact online. <|cite_start|> (Reference: Learning to Persuade on the Fly: Robustness Against Ignorance: Motivated by information sharing in online platforms, we study repeated persuasion between a sender and a stream of receivers where at each time, the sender observes a payoff-relevant state drawn independently and identically from an unknown distribution, and shares state information with the receivers who each choose an action. The sender seeks to persuade the receivers into taking actions aligned with the sender's preference by selectively sharing state information. However, in contrast to the standard models, neither the sender nor the receivers know the distribution, and the sender has to persuade while learning the distribution on the fly. We study the sender's learning problem of making persuasive action recommendations to achieve low regret against the optimal persuasion mechanism with the knowledge of the distribution. To do this, we first propose and motivate a persuasiveness criterion for the unknown distribution setting that centers robustness as a requirement in the face of uncertainty. Our main result is an algorithm that, with high probability, is robustly-persuasive and achieves $O(\sqrt{T\log T})$ regret, where $T$ is the horizon length. Intuitively, at each time our algorithm maintains a set of candidate distributions, and chooses a signaling mechanism that is simultaneously persuasive for all of them. 
Core to our proof is a tight analysis about the cost of robust persuasion, which may be of independent interest. We further prove that this regret order is optimal (up to logarithmic terms) by showing that no algorithm can achieve regret better than $\Omega(\sqrt{T})$.) <|cite_end|>~provide a persuasive learning algorithm, whereas, in our model, we show that ignorance of the prior precludes the possibility of committing to persuasive signaling schemes, and, thus, we need to resort to new techniques to circumvent the issue. Another line of research, which uses techniques similar to those employed in this work, studies learning in sequential decision making problems while satisfying unknown constraints <|cite_start|> (Reference: Exploiting Opponents Under Utility Constraints in Sequential Games: Recently, game-playing agents based on AI techniques have demonstrated super-human performance in several sequential games, such as chess, Go, and poker. Surprisingly, the multi-agent learning techniques that allowed to reach these achievements do not take into account the actual behavior of the human player, potentially leading to an impressive gap in performances. In this paper, we address the problem of designing artificial agents that learn how to effectively exploit unknown human opponents while playing repeatedly against them in an online fashion. We study the case in which the agent’s strategy during each repetition of the game is subject to constraints ensuring that the human’s expected utility is within some lower and upper thresholds. Our framework encompasses several real-world problems, such as human engagement in repeated game playing and human education by means of serious games. As a first result, we formalize a set of linear inequalities encoding the conditions that the agent’s strategy must satisfy at each iteration in order to do not violate the given bounds for the human’s expected utility. Then, we use such formulation in an upper confidence bound algorithm, and we prove that the resulting procedure suffers from sublinear regret and guarantees that the constraints are satisfied with high probability at each iteration. Finally, we empirically evaluate the convergence of our algorithm on standard testbeds of sequential games.) <|cite_end|> <|cite_start|> (Reference: Safe Learning in Tree-Form Sequential Decision Making: Handling Hard and Soft Constraints: We study decision making problems in which an agent sequentially interacts with a stochastic environment defined by means of a tree structure. The agent repeatedly faces the environment over time, and, after each round, it perceives a utility and a cost, which are both stochastic. The goal of the agent is to learn an optimal strategy in an online fashion, while keeping costs below a given safety threshold at the same time. Our model naturally fits many real-world scenarios, such as, e.g., opponent exploitation in games and web link selection. We study the hard-threshold problem of achieving sublinear regret while guaranteeing that the threshold constraint is satisfied at every iteration with high probability. First, we show that, in general, any algorithm with such a guarantee incurs in a linear regret. This motivates the introduction of a relaxed problem, called the soft-threshold problem, in which we only require that the cumulative violation of the threshold constraint grows sublinearly, and, thus, we can provide an algorithm with sublinear regret.
Next, in the hard-threshold problem, we show how a sublinear regret algorithm can be designed under the additional assumption that there exists a known strategy strictly satisfying the threshold constraint. We also show that our regret bounds are tight. Finally, we cast the opponent exploitation problem to our model, and we experimentally evaluate our algorithms on a standard testbed of sequential games.) <|cite_end|>. Finally, <|cite_start|> (Reference: Private Bayesian Persuasion with Sequential Games: We study an information-structure design problem (a.k.a. a persuasion problem) with a single sender and multiple receivers with actions of a priori unknown types, independently drawn from action-specific marginal probability distributions. As in the standard Bayesian persuasion model, the sender has access to additional information regarding the action types, which she can exploit when committing to a (noisy) signaling scheme through which she sends a private signal to each receiver. The novelty of our model is in considering the much more expressive case in which the receivers interact in a sequential game with imperfect information, with utilities depending on the game outcome and the realized action types. After formalizing the notions of ex ante and ex interim persuasiveness (which differ by the time at which the receivers commit to following the sender's signaling scheme), we investigate the continuous optimization problem of computing a signaling scheme which maximizes the sender's expected revenue. We show that computing an optimal ex ante persuasive signaling scheme is NP-hard when there are three or more receivers. Instead, in contrast with previous hardness results for ex interim persuasion, we show that, for games with two receivers, an optimal ex ante persuasive signaling scheme can be computed in polynomial time thanks to the novel algorithm we propose, based on the ellipsoid method.) <|cite_end|> study Bayesian persuasion with multiple receivers interacting in an imperfect-information sequential game. Unlike ours, their model adopts a different notion of persuasiveness, known as \emph{ex ante} persuasiveness, and it assumes that the prior is known. Other works study learning problems in which the sender does \emph{not} know the receivers' payoffs (but knows the prior); see, \emph{e.g.}, <|cite_start|> (Reference: Online Bayesian persuasion: In Bayesian persuasion, an informed sender has to design a signaling scheme that discloses the right amount of information so as to influence the behavior of a self-interested receiver. This kind of strategic interaction is ubiquitous in real-world economic scenarios. However, the seminal model by Kamenica and Gentzkow makes some stringent assumptions that limit its applicability in practice. One of the most limiting assumptions is, arguably, that the sender is required to know the receiver’s utility function to compute an optimal signaling scheme. We relax this assumption through an online learning framework in which the sender repeatedly faces a receiver whose type is unknown and chosen adversarially at each round from a finite set of possible types. We are interested in no-regret algorithms prescribing a signaling scheme at each round of the repeated interaction with performances close to that of a best-in-hindsight signaling scheme. First, we prove a hardness result on the per-round running time required to achieve no-α-regret for any α < 1.
Then, we provide algorithms for the full and partial feedback models with regret bounds sublinear in the number of rounds and polynomial in the size of the instance.) <|cite_end|> <|cite_start|> (Reference: Multi-Receiver Online Bayesian Persuasion: Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver's utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any $0 < \alpha \leq 1$, there is no polynomial-time no-$\alpha$-regret algorithm when the sender's utility function is supermodular or anonymous. Then, we focus on the case of submodular sender's utility functions and we show that, in this case, it is possible to design a polynomial-time no-$(1 - \frac{1}{e})$-regret algorithm. To do so, we introduce a general online gradient descent scheme to handle online learning problems with a finite number of possible loss functions. This requires the existence of an approximate projection oracle. We show that, in our setting, there exists one such projection oracle which can be implemented in polynomial time.) <|cite_end|> <|cite_start|> (Reference: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting: Bayesian persuasion studies how an informed sender should partially disclose information so as to influence the behavior of self-interested receivers. In the last years, a growing attention has been devoted to relaxing the assumption that the sender perfectly knows receiver's payoffs. The first crucial step towards such an achievement is to study settings where each receiver's payoffs depend on their unknown type, which is randomly determined by a known finite-supported probability distribution. This begets considerable computational challenges, as computing a sender-optimal signaling scheme is inapproximable up to within any constant factor. In this work, we circumvent this issue by leveraging ideas from mechanism design. In particular, we introduce a type reporting step in which the receiver is asked to report their type to the sender, after the latter has committed to a menu defining a signaling scheme for each possible receiver's type. We prove that, with a single receiver, the addition of this type reporting stage makes the sender's computational problem tractable. Then, we extend our framework to settings with multiple receivers, focusing on the case of no inter-agent externalities and binary actions. We show that it is possible to find a sender-optimal solution in polynomial-time by means of the ellipsoid method, given access to a suitable polynomial-time separation oracle. This can be implemented for supermodular and anonymous sender's utility functions. As for the case of submodular sender's utility functions, we first approximately cast the sender's problem into a linearly-constrained mathematical program whose objective function is the multi-linear extension of the sender's utility. 
Then, we show how to find in polynomial-time an approximate solution to the program by means of a continuous greedy algorithm. This provides a (1 -1/e)-approximation to the problem.) <|cite_end|>. <|paper_end|>
[ "<|reference_start|> Persuading voters: In a symmetric information voting model, an individual (politician) can influence voters’ choices by strategically designing a policy experiment (public signal). We characterize the politician’s optimal experiment. With a non-unanimous voting rule, she exploits voters’ heterogeneity by designing an experiment with realizations targeting different winning coalitions. Consequently, under a simple-majority rule, a majority of voters might be strictly worse off due to the politician’s influence. We characterize voters’ preferences over electoral rules and provide conditions for a majority of voters to prefer a supermajority (or unanimity) voting rule, in order to induce the politician to supply a more informative experiment. <|reference_end|>", "<|reference_start|> Bayesian Persuasion in Sequential Decision-Making: We study a dynamic model of Bayesian persuasion in sequential decision-making settings. An informed principal observes an external parameter of the world and advises an uninformed agent about actions to take over time. The agent takes actions in each time step based on the current state, the principal's advice/signal, and beliefs about the external parameter. The action of the agent updates the state according to a stochastic process. The model arises naturally in many applications, e.g., an app (the principal) can advice the user (the agent) on possible choices between actions based on additional real-time information the app has. We study the problem of designing a signaling strategy from the principal's point of view. We show that the principal has an optimal strategy against a myopic agent, who only optimizes their rewards locally, and the optimal strategy can be computed in polynomial time. In contrast, it is NP-hard to approximate an optimal policy against a far-sighted agent. Further, if the principal has the power to threaten the agent by not providing future signals, then we can efficiently compute a threat-based strategy. This strategy guarantees the principal's payoff as if playing against an agent who is far-sighted but myopic to future signals. <|reference_end|>", "<|reference_start|> Safe Learning in Tree-Form Sequential Decision Making: Handling Hard\nand Soft Constraints: We study decision making problems in which an agent sequentially interacts with a stochastic environment defined by means of a tree structure . The agent repeatedly faces the environment over time, and, after each round, it perceives a utility and a cost , which are both stochastic. The goal of the agent is to learn an optimal strategy in an online fashion, while keeping costs below a given safety threshold at the same time. Our model naturally fits many real-world scenarios, such as, e.g. , opponent exploitation in games and web link selection. We study the hard-threshold problem of achieving sublinear regret while guaranteeing that the threshold constraint is satisfied at every iteration with high probability. First, we show that, in general, any algorithm with such a guarantee incurs in a linear regret. This motivates the introduction of a relaxed problem, called the soft-threshold problem, in which we only require that the cumulative violation of the threshold constraint grows sublin-early, and, thus, we can provide an algorithm with sublinear regret. Next, in the hard-threshold problem, we show how a sublinear regret algorithm can be designed under the additional assumption that there exists a known strategy strictly satisfying the threshold constraint. 
We also show that our regret bounds are tight. Finally, we cast the opponent exploitation problem to our model, and we experimentally evaluate our algorithms on a standard testbed of sequential games. <|reference_end|>", "<|reference_start|> Multi-Receiver Online Bayesian Persuasion: Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver's utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any $0 < \\alpha \\leq 1$, there is no polynomial-time no-$\\alpha$-regret algorithm when the sender's utility function is supermodular or anonymous. Then, we focus on the case of submodular sender's utility functions and we show that, in this case, it is possible to design a polynomial-time no-$(1 - \\frac{1}{e})$-regret algorithm. To do so, we introduce a general online gradient descent scheme to handle online learning problems with a finite number of possible loss functions. This requires the existence of an approximate projection oracle. We show that, in our setting, there exists one such projection oracle which can be implemented in polynomial time. <|reference_end|>" ]
[ 2, 18, 22, 25 ]
{"<|cite_2|>": "ss-1064876", "<|cite_1|>": "ss-1413469", "<|multi_cite_3_1|>": "ss-1253892", "<|multi_cite_3_2|>": "arxiv-308862", "<|multi_cite_3_3|>": "arxiv-220755", "<|multi_cite_4_1|>": "arxiv-88904", "<|multi_cite_4_2|>": "ss-968555", "<|multi_cite_5_1|>": "ss-1253880", "<|multi_cite_5_2|>": "ss-968555", "<|multi_cite_6_1|>": "arxiv-28507", "<|multi_cite_6_2|>": "arxiv-130967", "<|multi_cite_6_3|>": "arxiv-394425", "<|multi_cite_6_4|>": "arxiv-395473", "<|multi_cite_7_1|>": "ss-968556", "<|multi_cite_7_2|>": "ss-1413468", "<|cite_8|>": "arxiv-322435", "<|cite_12|>": "arxiv-347085", "<|cite_13|>": "arxiv-400729", "<|cite_14|>": "arxiv-347085", "<|cite_9|>": "arxiv-322435", "<|cite_15|>": "arxiv-322435", "<|multi_cite_10_1|>": "ss-968557", "<|multi_cite_10_2|>": "ss-968558", "<|cite_16|>": "ss-1253882", "<|multi_cite_11_1|>": "ss-2247461", "<|multi_cite_11_2|>": "arxiv-347717", "<|multi_cite_11_3|>": "arxiv-396294"}
1908.07162
<|paper_start|> Title: Discriminative Topic Mining via Category-Name Guided Text Embedding Abstract: Discriminative Topic Mining via Category-Name Guided Text Embedding: Mining a set of meaningful and distinctive topics automatically from massive text corpora has broad applications. Existing topic models, however, typically work in a purely unsupervised way, often generating topics that do not fit users' particular needs and yielding suboptimal performance on downstream tasks. We propose a new task, discriminative topic mining, which leverages a set of user-provided category names to mine discriminative topics from text corpora. This new task not only helps a user understand clearly and distinctively the topics he/she is most interested in, but also directly benefits keyword-driven classification tasks. We develop CatE, a novel category-name guided text embedding method for discriminative topic mining, which effectively leverages minimal user guidance to learn a discriminative embedding space and discover category representative terms in an iterative manner. We conduct a comprehensive set of experiments to show that CatE mines a high-quality set of topics guided by category names only, and benefits a variety of downstream applications including weakly-supervised classification and lexical entailment direction identification. Introduction To help users effectively and efficiently comprehend a large set of text documents, it is of great interest to generate a set of meaningful and coherent topics automatically from a given corpus. Topic models <|cite_start|> (Reference: Latent Dirichlet allocation: with the most likely topic assignments D. Blei Topic Models Monday, June 16, 14 Learning Fix K number of topics We have a set of D documents Goal: use LDA to learn the topic representation of each document and the words associated to each topic.) <|cite_end|> <|cite_start|> (Reference: Probabilistic Latent Semantic Indexing: Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain-specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous.) <|cite_end|> are such unsupervised statistical tools that discover latent topics from text corpora. Due to their effectiveness in uncovering hidden semantic structure in text collections, topic models are widely used in text mining <|cite_start|> (Reference: Mixture-model adaptation for smt: We describe a mixture-model approach to adapting a Statistical Machine Translation System for new domains, using weights that depend on text distances to mixture components. We investigate a number of variants on this approach, including cross-domain versus dynamic adaptation; linear versus loglinear mixtures; language and translation model adaptation; different methods of assigning weights; and granularity of the source unit being adapted to.
The best methods achieve gains of approximately one BLEU percentage point over a state-of-the art non-adapted baseline system.) <|cite_end|> <|cite_start|> (Reference: Automatic labeling of multinomial topic models: Multinomial distributions over words are frequently used to model topics in text collections. A common, major challenge in applying all such topic models to any text mining problem is to label a multinomial topic model accurately so that a user can interpret the discovered topic. So far, such labels have been generated manually in a subjective way. In this paper, we propose probabilistic approaches to automatically labeling multinomial topic models in an objective way. We cast this labeling problem as an optimization problem involving minimizing Kullback-Leibler divergence between word distributions and maximizing mutual information between a label and a topic model. Experiments with user study have been done on two text data sets with different genres.The results show that the proposed labeling methods are quite effective to generate labels that are meaningful and useful for interpreting the discovered topic models. Our methods are general and can be applied to labeling topics learned through all kinds of topic models such as PLSA, LDA, and their variations.) <|cite_end|> and information retrieval tasks <|cite_start|> (Reference: A Large-scale Evaluation and Analysis of Personalized Search Strategies: Although personalized search has been proposed for many years and many personalization strategies have been investigated, it is still unclear whether personalization is consistently effective on different queries for different users, and under different search contexts. In this paper, we study this problem and get some preliminary conclusions. We present a large-scale evaluation framework for personalized search based on query logs, and then evaluate five personalized search strategies (including two click-based and three profile-based ones) using 12-day MSN query logs. By analyzing the results, we reveal that personalized search has significant improvement over common web search on some queries but it also has little effect on other queries (e.g., queries with small click entropy). It even harms search accuracy under some situations. Furthermore, we show that straightforward click-based personalization strategies perform consistently and considerably well, while profile-based ones are unstable in our experiments. We also reveal that both long-term and short-term contexts are very important in improving search performance for profile-based personalized search strategies.) <|cite_end|> <|cite_start|> (Reference: LDA-based Document Models for Ad-hoc Retrieval: Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. 
We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency.) <|cite_end|>. Despite their effectiveness, traditional topic models suffer from two noteworthy limitations: (1) \emph{Failure to incorporate user guidance}. Topic models tend to retrieve the most general and prominent topics from a text collection, which may not be of a user's particular interest, or provide a skewed and biased summarization of the corpus. (2) \emph{Failure to enforce distinctiveness among retrieved topics}. Concepts are most effectively interpreted via their uniquely defining features. For example, Egypt is known for pyramids and China is known for the Great Wall. Topic models, however, do not impose discriminative constraints, resulting in vague interpretations of the retrieved topics. Table~\ref{tab:lda_topic} shows three topics retrieved from the New York Times (\textbf{NYT}) annotated corpus <|cite_start|> (Reference: An Approach to Improving the Classification of the New York Times Annotated Corpus: ) <|cite_end|> via LDA <|cite_start|> (Reference: Latent Dirichlet allocation: with the most likely topic assignments D. Blei Topic Models Monday, June 16, 14 Learning Fix K number of topics We have a set of D documents Goal: use LDA to learn the topic representation of each document and the words associated to each topic.) <|cite_end|>. We can see that it is difficult to clearly define the meaning of the three topics due to an overlap of their semantics (\eg, the term ``united states'' appears in all three topics). \setlength{\tabcolsep}{3pt} \begin{table}[h] \centering \caption{LDA retrieved topics on \textbf{NYT} dataset. The meanings of the retrieved topics overlap with each other.} \vspace*{-1em} \label{tab:lda_topic} \scalebox{0.95}{ \begin{tabular}{c|c|c} \toprule Topic 1 & Topic 2 & Topic 3 \\ \midrule canada, united states & sports, united states & united states, iraq \\ canadian, economy & olympic, games & government, president \\ \bottomrule \end{tabular} } \vspace*{-1em} \end{table} \setlength{\tabcolsep}{5pt} In order to incorporate user knowledge or preference into topic discovery for mining distinctive topics from a text corpus, we propose a new task, \textbf{Discriminative Topic Mining}, which takes only a set of category names as user guidance, and aims to retrieve a set of representative and discriminative terms under each provided category. In many cases, a user may have a specific set of topics of interest in mind, or have prior knowledge about the potential topics in a corpus. Such user interest or prior knowledge may come naturally in the form of a set of category names that could be used to guide the topic discovery process, resulting in more desirable results that better cater to a user's need and fit specific downstream applications. For example, a user may provide several country names and rely on discriminative topic mining to retrieve each country's provinces, cities, currency, etc. from a text corpus. We will show that this new task not only helps the user to clearly and distinctively understand his/her topics of interest, but also benefits keyword-driven classification tasks. There exist previous studies that attempt to incorporate prior knowledge into topic models. Along one line of work, supervised topic models such as Supervised LDA <|cite_start|> (Reference: Supervised Topic Models: We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of labelled documents.
The model accommodates a variety of response types. We derive a maximum-likelihood procedure for parameter estimation, which relies on variational approximations to handle intractable posterior expectations. Prediction problems motivate this research: we use the fitted model to predict response values for new documents. We test sLDA on two real-world problems: movie ratings predicted from reviews, and web page popularity predicted from text descriptions. We illustrate the benefits of sLDA versus modern regularized regression, as well as versus an unsupervised LDA analysis followed by a separate regression.) <|cite_end|> and DiscLDA <|cite_start|> (Reference: DiscLDA: Discriminative learning for dimensionality reduction and classification: Probabilistic topic models have become popular as methods for dimensionality reduction in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood or Bayesian methods. In this paper, we discuss an alternative: a discriminative framework in which we assume that supervised side information is present, and in which we wish to take that side information into account in finding a reduced dimensionality representation. Specifically, we present DiscLDA, a discriminative variation on Latent Dirichlet Allocation (LDA) in which a class-dependent linear transformation is introduced on the topic mixture proportions. This parameter is estimated by maximizing the conditional likelihood. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification. We compare the predictive power of the latent structure of DiscLDA with unsupervised LDA on the 20 Newsgroups document classification task and show how our model can identify shared topics across classes as well as class-dependent topics.) <|cite_end|> guide the model to predict category labels based on document-level training data. While they do improve the discriminative power of unsupervised topic models on classification tasks, they rely on massive hand-labeled documents, which may be difficult to obtain in practical applications. Along another line of work that is more similar to our setting, users are asked to provide a set of seed words to guide the topic discovery process, which is referred to as seed-guided topic modeling <|cite_start|> (Reference: Latent dirichlet allocation with topic-in-set knowledge: Latent Dirichlet Allocation is an unsupervised graphical model which can discover latent topics in unlabeled data. We propose a mechanism for adding partial supervision, called topic-in-set knowledge, to latent topic modeling. This type of supervision can be used to encourage the recovery of topics which are more relevant to user modeling goals than the topics which would be recovered otherwise. Preliminary experiments on text datasets are presented to demonstrate the potential effectiveness of this method.) <|cite_end|> <|cite_start|> (Reference: Incorporating lexical priors into topic models: Topic models have great potential for helping users understand document corpora. This potential is stymied by their purely unsupervised nature, which often leads to topics that are neither entirely meaningful nor effective in extrinsic tasks (Chang et al., 2009). 
We propose a simple and effective way to guide topic models to learn topics of specific interest to a user. We achieve this by providing sets of seed words that a user believes are representative of the underlying topics in a corpus. Our model uses these seeds to improve both topic-word distributions (by biasing topics to produce appropriate seed words) and to improve document-topic distributions (by biasing documents to select topics related to the seed words they contain). Extrinsic evaluation on a document clustering task reveals a significant improvement when using seed information, even over other models that use seed information naively.) <|cite_end|>. However, they still do not impose requirements on the distinctiveness of the retrieved topics and thus are not optimized for discriminative topic presentation and other applications such as keyword-driven classification. We develop a novel category-name guided text embedding method, \CatEm, for discriminative topic mining. \CatEm consists of two modules: (1) A \emph{category-name guided text embedding learning module} that takes a set of category names to learn category distinctive word embeddings by modeling the text generative process conditioned on the user-provided categories, and (2) a \emph{category representative word retrieval module} that selects category representative words based on both word embedding similarity and word distributional specificity. The two modules collaborate in an iterative way: At each iteration, the former refines word embeddings and category embeddings for accurate representative word retrieval; the latter selects representative words that will be used by the former at the next iteration. Our contributions can be summarized as follows. \begin{enumerate} \parskip -0.2ex \item We propose discriminative topic mining, a new task for topic discovery from text corpora with a set of category names as the only supervision. We show qualitatively and quantitatively that this new task helps users obtain a clear and distinctive understanding of the topics of interest, and directly benefits keyword-driven classification tasks. \item We develop a category-name guided text embedding framework for discriminative topic mining by modeling the text generation process. The model effectively learns a category distinctive embedding space that best separates the given set of categories based on word-level supervision. \item We propose an unsupervised method that jointly learns word embeddings and word distributional specificity, which allow us to consider both relatedness and specificity when retrieving category representative terms. We also provide theoretical interpretations of the model. \item We conduct a comprehensive set of experiments on a variety of tasks including topic mining, weakly-supervised classification and lexical entailment direction identification to demonstrate the effectiveness of our model on these tasks. \end{enumerate} Related Work We review two lines of related work that are most relevant to our task: Topic modeling and task-oriented text embedding. \subsection{Topic Modeling} Topic models discover semantically relevant terms that form coherent topics via probabilistic generative models. Unsupervised topic models have been studied for decades, among which pLSA <|cite_start|> (Reference: Probabilistic Latent Semantic Indexing: Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data.
Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain-specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous.) <|cite_end|> and LDA <|cite_start|> (Reference: Latent Dirichlet allocation: with the most likely topic assignments D. Blei Topic Models Monday, June 16, 14 Learning Fix K number of topics We have a set of D documents Goal: use LDA to learn the topic representation of each document and the words associated to each topic.) <|cite_end|> are the most famous ones, serving as the backbone for many future variants. The basic idea is to represent documents via mixtures over latent topics, where each topic is characterized by a distribution over words (a schematic version of this generative process is sketched below). Subsequent studies led to a large number of variants such as Hierarchical LDA <|cite_start|> (Reference: Hierarchical topic models and the nested Chinese restaurant process: We address the problem of learning topic hierarchies from data. The model selection problem in this domain is daunting—which of the large collection of possible trees to use? We take a Bayesian approach, generating an appropriate prior via a distribution on partitions that we refer to as the nested Chinese restaurant process. This nonparametric prior allows arbitrarily large branching factors and readily accommodates growing data collections. We build a hierarchical topic model by combining this prior with a likelihood that is based on a hierarchical variant of latent Dirichlet allocation. We illustrate our approach on simulated data and with an application to the modeling of NIPS abstracts.) <|cite_end|>, Correlated Topic Models <|cite_start|> (Reference: Correlated Topic Models: Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets.
<|cite_end|>, Pachinko Allocation <|cite_start|> (Reference: Pachinko allocation: Dag-structured mixture models of topic correlations: Latent Dirichlet allocation (LDA) and other related topic models are increasingly popular tools for summarization and manifold discovery in discrete data. However, LDA does not capture correlations between topics. In this paper, we introduce the pachinko allocation model (PAM), which captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). The leaves of the DAG represent individual words in the vocabulary, while each interior node represents a correlation among its children, which may be words or other interior nodes (topics). PAM provides a flexible alternative to recent work by Blei and Lafferty (2006), which captures correlations only between pairs of topics. Using text data from newsgroups, historic NIPS proceedings and other research paper corpora, we show improved performance of PAM in document classification, likelihood of held-out data, the ability to support finer-grained topics, and topical keyword coherence.) <|cite_end|> and Concept Topic Models <|cite_start|> (Reference: Combining concept hierarchies and statistical topic models: Statistical topic models provide a general data-driven framework for automated discovery of high-level knowledge from large collections of text documents. While topic models can potentially discover a broad range of themes in a data set, the interpretability of the learned topics is not always ideal. Human-defined concepts, on the other hand, tend to be semantically richer due to careful selection of words to define concepts but they tend not to cover the themes in a data set exhaustively. In this paper, we propose a probabilistic framework to combine a hierarchy of human-defined semantic concepts with statistical topic models to seek the best of both worlds. Experimental results using two different sources of concept hierarchies and two collections of text documents indicate that this combination leads to systematic improvements in the quality of the associated language models as well as enabling new techniques for inferring and visualizing the semantics of a document.) <|cite_end|>. Although unsupervised topic models are sufficiently expressive to model multiple topics per document, they are unable to incorporate the category and label information into their learning procedure. Several modifications of topic models have been proposed to incorporate supervision. Supervised LDA <|cite_start|> (Reference: Supervised Topic Models: We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of labelled documents. The model accommodates a variety of response types. We derive a maximum-likelihood procedure for parameter estimation, which relies on variational approximations to handle intractable posterior expectations. Prediction problems motivate this research: we use the fitted model to predict response values for new documents. We test sLDA on two real-world problems: movie ratings predicted from reviews, and web page popularity predicted from text descriptions. We illustrate the benefits of sLDA versus modern regularized regression, as well as versus an unsupervised LDA analysis followed by a separate regression.) 
<|cite_end|> and DiscLDA <|cite_start|> (Reference: DiscLDA: Discriminative learning for dimensionality reduction and classification: Probabilistic topic models have become popular as methods for dimensionality reduction in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood or Bayesian methods. In this paper, we discuss an alternative: a discriminative framework in which we assume that supervised side information is present, and in which we wish to take that side information into account in finding a reduced dimensionality representation. Specifically, we present DiscLDA, a discriminative variation on Latent Dirichlet Allocation (LDA) in which a class-dependent linear transformation is introduced on the topic mixture proportions. This parameter is estimated by maximizing the conditional likelihood. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification. We compare the predictive power of the latent structure of DiscLDA with unsupervised LDA on the 20 Newsgroups document classification task and show how our model can identify shared topics across classes as well as class-dependent topics.) <|cite_end|> assume each document is associated with a label and train the model by predicting the document category label. Author Topic Models <|cite_start|> (Reference: The Author-Topic Model for Authors and Documents: We introduce the author-topic model, a generative model for documents that extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include authorship information. Each author is associated with a multinomial distribution over topics and each topic is associated with a multinomial distribution over words. A document with multiple authors is modeled as a distribution over topics that is a mixture of the distributions associated with the authors. We apply the model to a collection of 1,700 NIPS conference papers and 160,000 CiteSeer abstracts. Exact inference is intractable for these datasets and we use Gibbs sampling to estimate the topic and author distributions. We compare the performance with two other generative models for documents, which are special cases of the author-topic model: LDA (a topic model) and a simple author model in which each author is associated with a distribution over words rather than a distribution over topics. We show topics recovered by the author-topic model, and demonstrate applications to computing similarity between authors and entropy of author output.) <|cite_end|> and Multi-Label Topic Models <|cite_start|> (Reference: Statistical Topic Models for Multi-Label Document Classification: Machine learning approaches to multi-label document classification have to date largely relied on discriminative modeling techniques such as support vector machines. A drawback of these approaches is that performance rapidly drops off as the total number of labels and the number of labels per document increase. This problem is amplified when the label frequencies exhibit the type of highly skewed distributions that are often observed in real-world datasets. In this paper we investigate a class of generative statistical topic models for multi-label documents that associate individual word tokens with different labels. 
We investigate the advantages of this approach relative to discriminative models, particularly with respect to classification problems involving large numbers of relatively rare labels. We compare the performance of generative and discriminative approaches on document labeling tasks ranging from datasets with several thousand labels to datasets with tens of labels. The experimental results indicate that probabilistic generative models can achieve competitive multi-label classification performance compared to discriminative methods, and have advantages for datasets with many labels and skewed label frequencies.) <|cite_end|> further model each document as a bag of words with a bag of labels. However, these models obtain topics that do not correspond directly to the labels. Labeled LDA <|cite_start|> (Reference: Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-Labeled Corpora: A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets.) <|cite_end|> and SSHLDA <|cite_start|> (Reference: SSHLDA: A Semi-Supervised Hierarchical Topic Model: Supervised hierarchical topic modeling and unsupervised hierarchical topic modeling are usually used to obtain hierarchical topics, such as hLLDA and hLDA. Supervised hierarchical topic modeling makes heavy use of the information from observed hierarchical labels, but cannot explore new topics; while unsupervised hierarchical topic modeling is able to detect automatically new topics in the data space, but does not make use of any information from hierarchical labels. In this paper, we propose a semi-supervised hierarchical topic model which aims to explore new topics automatically in the data space while incorporating the information from observed hierarchical labels into the modeling process, called Semi-Supervised Hierarchical Latent Dirichlet Allocation (SSHLDA). We also prove that hLDA and hLLDA are special cases of SSHLDA. We conduct experiments on Yahoo! Answers and ODP datasets, and assess the performance in terms of perplexity and clustering. The experimental results show that predictive ability of SSHLDA is better than that of baselines, and SSHLDA can also achieve significant improvement over baselines for clustering on the FScore measure.) <|cite_end|> can be used to solve this problem. However, all the \textit{supervised} models mentioned above require sufficient annotated documents, which are expensive to obtain in some domains. In contrast, our model relies on very weak supervision (\ie, a set of category names), which is much easier to obtain.
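For reference, the generative view sketched at the beginning of this subsection can be written schematically as follows (standard LDA notation, where $\alpha$ and $\eta$ are Dirichlet hyperparameters; supervised variants such as sLDA additionally generate a per-document label or response from the topic assignments):
\begin{align*}
& \beta_k \sim \mathrm{Dirichlet}(\eta) && \text{for each topic } k = 1, \dots, K, \\
& \theta_d \sim \mathrm{Dirichlet}(\alpha) && \text{for each document } d, \\
& z_{d,n} \sim \mathrm{Multinomial}(\theta_d), \;\; w_{d,n} \sim \mathrm{Multinomial}(\beta_{z_{d,n}}) && \text{for each token } n \text{ of } d.
\end{align*}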
Several studies leverage word-level supervision to build topic models. For example, Dirichlet Forest priors <|cite_start|> (Reference: Latent dirichlet allocation with topic-in-set knowledge: Latent Dirichlet Allocation is an unsupervised graphical model which can discover latent topics in unlabeled data. We propose a mechanism for adding partial supervision, called topic-in-set knowledge, to latent topic modeling. This type of supervision can be used to encourage the recovery of topics which are more relevant to user modeling goals than the topics which would be recovered otherwise. Preliminary experiments on text datasets are presented to demonstrate the potential effectiveness of this method.) <|cite_end|> have been used to incorporate must-link and cannot-link constraints among seed words. Seeded LDA <|cite_start|> (Reference: Incorporating lexical priors into topic models: Topic models have great potential for helping users understand document corpora. This potential is stymied by their purely unsupervised nature, which often leads to topics that are neither entirely meaningful nor effective in extrinsic tasks (Chang et al., 2009). We propose a simple and effective way to guide topic models to learn topics of specific interest to a user. We achieve this by providing sets of seed words that a user believes are representative of the underlying topics in a corpus. Our model uses these seeds to improve both topic-word distributions (by biasing topics to produce appropriate seed words) and to improve document-topic distributions (by biasing documents to select topics related to the seed words they contain). Extrinsic evaluation on a document clustering task reveals a significant improvement when using seed information, even over other models that use seed information naively.) <|cite_end|> takes user-provided seed words as supervision to learn seed-related topics via a seed topic distribution. CorEx <|cite_start|> (Reference: Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge: While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA.) <|cite_end|> learns maximally informative topics from the corpus and uses total correlation as the measure. It can incorporate seed words by jointly compressing the text corpus and preserving seed-relevant information. However, none of the above systems \textit{explicitly} models the distinction among different topics, and they also do not require the retrieved terms to belong to the provided categories.
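To illustrate the distinctiveness requirement that these seed-guided systems lack, the following minimal Python sketch contrasts naive similarity-based term retrieval with a discriminative variant that admits a term only under its single nearest category. This is an illustrative toy under assumed inputs (pre-trained word and category vectors), not \CatEm itself, which additionally models word distributional specificity and refines the embedding space iteratively; all names here are hypothetical.
\begin{verbatim}
import numpy as np

def retrieve_topics(word_vecs, cat_vecs, vocab, k=5, discriminative=True):
    # Normalize rows so that dot products equal cosine similarities.
    W = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    C = cat_vecs / np.linalg.norm(cat_vecs, axis=1, keepdims=True)
    sim = W @ C.T                 # shape: (vocabulary size, #categories)
    nearest = sim.argmax(axis=1)  # each term's single closest category
    topics = {}
    for c in range(C.shape[0]):
        scores = sim[:, c].copy()
        if discriminative:
            # Mask terms whose nearest category is a different one, so no
            # term can be retrieved under two topics -- the distinctiveness
            # constraint that purely seed-guided models do not enforce.
            scores[nearest != c] = -np.inf
        order = np.argsort(-scores)[:k]
        topics[c] = [vocab[i] for i in order if np.isfinite(scores[i])]
    return topics
\end{verbatim}
With \texttt{discriminative=False}, a generic term close to several category names (\eg, ``united states'' in Table~\ref{tab:lda_topic}) can surface under all of them; with the mask enabled it is assigned to at most one topic.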
As a result, the retrieved topics still suffer from irrelevant term intrusion, as we will demonstrate in the experiment section. With the development of word embeddings <|cite_start|> (Reference: Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.) <|cite_end|> <|cite_start|> (Reference: GloVe: Global Vectors for word representation: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.) <|cite_end|> <|cite_start|> (Reference: Enriching Word Vectors with Subword Information: Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character $n$-grams. A vector representation is associated to each character $n$-gram; words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.) <|cite_end|>, several studies propose to extend LDA to incorporate word embeddings.
One common strategy is to convert the discrete text into continuous representations of embeddings, and then adapt LDA to generate real-valued data <|cite_start|> (Reference: Gaussian LDA for Topic Models with Word Embeddings: Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.) <|cite_end|> <|cite_start|> (Reference: Topic discovery for short texts using word embeddings: Discovering topics in short texts, such as news titles and tweets, has become an important task for many content analysis applications. However, due to the lack of rich context information in short texts, the performance of conventional topic models on short texts is usually unsatisfying. In this paper, we propose a novel topic model for short text corpus using word embeddings. Continuous space word embeddings, which is proven effective at capturing regularities in language, is incorporated into our model to provide additional semantics. Thus we model each short document as a Gaussian topic over word embeddings in the vector space. In addition, considering that background words in a short text are usually not semantically related, we introduce a discrete background mode over word types to complement the continuous Gaussian topics. We evaluate our model on news titles from data sources like abcnews, showing that our model is able to extract more coherent topics from short texts compared with the baseline methods and learn better topic representation for each short document.) <|cite_end|> <|cite_start|> (Reference: Nonparametric Spherical Topic Modeling with Word Embeddings: Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. 
Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.) <|cite_end|> <|cite_start|> (Reference: Collaboratively improving topic discovery and word embeddings by coordinating global and local contexts: A text corpus typically contains two types of context information -- global context and local context. Global context carries topical information which can be utilized by topic models to discover topic structures from the text corpus, while local context can train word embeddings to capture semantic regularities reflected in the text corpus. This encourages us to exploit the useful information in both the global and the local context information. In this paper, we propose a unified language model based on matrix factorization techniques which 1) takes the complementary global and local context information into consideration simultaneously, and 2) models topics and learns word embeddings collaboratively. We empirically show that by incorporating both global and local context, this collaborative model can not only significantly improve the performance of topic discovery over the baseline topic models, but also learn better word embeddings than the baseline word embedding models. We also provide qualitative analysis that explains how the cooperation of global and local context information can result in better topic structures and word embeddings.) <|cite_end|>. There are a few other ways of combining LDA and embeddings. For example, <|cite_start|> (Reference: Improving Topic Models with Latent Feature Word Representations: Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document clustering and document classification tasks, especially on datasets with few or short documents.) <|cite_end|> mixes the likelihood defined by LDA with a log-linear model that uses pre-fitted word embeddings; <|cite_start|> (Reference: Distilled Wasserstein Learning for Word Embedding and Topic Modeling: We propose a novel Wasserstein method with a distillation mechanism, yielding joint learning of word embeddings and topics. The proposed method is based on the fact that the Euclidean distance between word embeddings may be employed as the underlying distance in the Wasserstein topic model. The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports. Such a strategy provides the updating of word embeddings with robust guidance, improving the algorithmic convergence. 
As an application, we focus on patient admission records, in which the proposed method embeds the codes of diseases and procedures and learns the topics of admissions, obtaining superior performance on clinically-meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.) <|cite_end|> adopts a geometric perspective, using Wasserstein distances to learn topics and word embeddings jointly; <|cite_start|> (Reference: Topic Modeling in Embedding Spaces: Topic modeling analyzes documents to learn meaningful patterns of words. However, existing topic models fail to learn interpretable topics when working with large and heavy-tailed vocabularies. To this end, we develop the Embedded Topic Model (ETM), a generative model of documents that marries traditional topic models with word embeddings. In particular, it models each word with a categorical distribution whose natural parameter is the inner product between a word embedding and an embedding of its assigned topic. To fit the ETM, we develop an efficient amortized variational inference algorithm. The ETM discovers interpretable topics even with large vocabularies that include rare words and stop words. It outperforms existing document models, such as latent Dirichlet allocation (LDA), in terms of both topic quality and predictive performance.) <|cite_end|> uses the distributed representation of word embedding to enhance the robustness of topic models to rare words. Motivated by the success of these recent topic models, we model the text generation process in the embedding space, and propose several designs to tailor our model for the task of discriminative topic mining. \subsection{Task-Oriented Text Embedding} Discriminative text embeddings are typically trained in a supervised manner with task-specific training data, such as training CNNs <|cite_start|> (Reference: Convolutional Neural Networks for Sentence Classification: We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.) <|cite_end|> or RNNs <|cite_start|> (Reference: {Hierarchical attention networks for document classification: We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the word and sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences.) <|cite_end|> for text classification.
Among supervised word embedding models, some previous studies are more relevant because they explicitly leverage the category information to optimize embedding for classification tasks. Predictive Text Embedding (PTE) <|cite_start|> (Reference: PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks: Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector, have been attracting increasing attention due to their simplicity, scalability, and effectiveness. However, comparing to sophisticated deep learning architectures such as convolutional neural networks, these methods usually yield inferior results when applied to particular machine learning tasks. One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task. Although the low dimensional representations learned are applicable to many different tasks, they are not particularly tuned for any task. In this paper, we fill this gap by proposing a semi-supervised representation learning method for text data, which we call the \textit{predictive text embedding} (PTE). Predictive text embedding utilizes both labeled and unlabeled data to learn the embedding of text. The labeled information and different levels of word co-occurrence information are first represented as a large-scale heterogeneous text network, which is then embedded into a low dimensional space through a principled and efficient algorithm. This low dimensional embedding not only preserves the semantic closeness of words and documents, but also has a strong predictive power for the particular task. Compared to recent supervised approaches based on convolutional neural networks, predictive text embedding is comparable or more effective, much more efficient, and has fewer parameters to tune.) <|cite_end|> constructs a heterogeneous text network and jointly embeds words, documents and labels based on word-word and word-document co-occurrences as well as labeled documents. Label-Embedding Attentive Model <|cite_start|> (Reference: Joint Embedding of Words and Labels for Text Classification: Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences. We propose to view text classification as a label-word joint embedding problem: each label is embedded in the same space with the word vectors. We introduce an attention framework that measures the compatibility of embeddings between text sequences and labels. The attention is learned on a training set of labeled samples to ensure that, given a text sequence, the relevant words are weighted higher than the irrelevant ones. Our method maintains the interpretability of word embeddings, and enjoys a built-in ability to leverage alternative sources of information, in addition to input text sequences. Extensive results on the several large text datasets show that the proposed framework outperforms the state-of-the-art methods by a large margin, in terms of both accuracy and speed.) <|cite_end|> jointly embeds words and labels so that attention mechanisms can be employed to discover category-distinctive words. All the above frameworks require labeled training documents for fine-tuning word embeddings. Our method only requires category names, which are much easier to obtain, to learn a discriminative embedding space over the categories.
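For intuition, the following is a minimal, hypothetical sketch of retrieving category-representative terms from pre-trained word embeddings using nothing but the category names; it illustrates the weak-supervision setting rather than the exact procedure of any cited system, and the vocabulary and embedding matrix are assumed inputs:

\begin{verbatim}
# Hedged sketch: rank vocabulary terms by cosine similarity to the
# embedding of each category name. Illustrative only.
import numpy as np

def top_terms(vocab, emb, category_names, k=10):
    """vocab: list of V words; emb: (V, d) word-vector matrix;
    category_names: words that appear in vocab."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    idx = {w: i for i, w in enumerate(vocab)}
    ranked = {}
    for name in category_names:
        sims = emb @ emb[idx[name]]          # cosine similarities
        order = np.argsort(-sims)
        ranked[name] = [vocab[i] for i in order[:k + 1]
                        if vocab[i] != name][:k]
    return ranked
\end{verbatim}

Note that similarity alone only captures relatedness; it does not guarantee that a retrieved term actually belongs to the category, which is precisely the specificity issue taken up next.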
Some recent studies propose to learn embeddings for lexical entailment, which is relevant to our task because it may help determine which terms belong to a category. Hyperbolic models such as Poincar\'e <|cite_start|> (Reference: On the nonlinear Poincar\'e flow: We develop a tool in order to analyse the dynamics of differentiable flows with singularities. It provides an abstract model for the local dynamics that can be used in order to control the size of invariant manifolds. This work is the first part of the results announced in [CY2].) <|cite_end|> <|cite_start|> (Reference: On the nonlinear Poincar\'e flow: We develop a tool in order to analyse the dynamics of differentiable flows with singularities. It provides an abstract model for the local dynamics that can be used in order to control the size of invariant manifolds. This work is the first part of the results announced in [CY2].) <|cite_end|> <|cite_start|> (Reference: Embedding Text in Hyperbolic Spaces: Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel & Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as word-context frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model's learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some -- but not all -- downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.) <|cite_end|>, Lorentz <|cite_start|> (Reference: Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry: We are concerned with the discovery of hierarchical relationships from large-scale unstructured similarity scores. For this purpose, we study different models of hyperbolic space and find that learning embeddings in the Lorentz model is substantially more efficient than in the Poincar\'e-ball model. We show that the proposed approach allows us to learn high-quality embeddings of large taxonomies which yield improvements over Poincar\'e embeddings, especially in low dimensions. Lastly, we apply our model to discover hierarchies in two real-world datasets: we show that an embedding in hyperbolic space can reveal important aspects of a company's organizational structure as well as reveal historical relationships between language families.) <|cite_end|> and hyperbolic cone <|cite_start|> (Reference: Hyperbolic Entailment Cones for Learning Hierarchical Embeddings: Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning. We here present a novel method to embed directed acyclic graphs. 
Following prior work, we first advocate for using hyperbolic spaces which provably model tree-like structures better than Euclidean geometry. Second, we view hierarchical relations as partial orders defined using a family of nested geodesically convex cones. We prove that these entailment cones admit an optimal shape with a closed form expression both in the Euclidean and hyperbolic spaces, and they canonically define the embedding learning process. Experiments show significant improvements of our method over strong recent baselines both in terms of representational capacity and generalization.) <|cite_end|> have proven successful in graded lexical entailment detection. However, the above models are supervised and require hypernym-hyponym training pairs, which may not be available in the setting of topic discovery. Our model jointly learns the word vector representation in the embedding space and its distributional specificity without requiring supervision, and simultaneously considers relatedness and specificity of words when retrieving category-representative terms. <|paper_end|>
[ "<|reference_start|> {Probabilistic Latent Semantic Indexing: Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain{specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous. <|reference_end|>", "<|reference_start|> Combining concept hierarchies and statistical topic models: Statistical topic models provide a general data-driven framework for automated discovery of high-level knowledge from large collections of text documents. While topic models can potentially discover a broad range of themes in a data set, the interpretability of the learned topics is not always ideal. Human-defined concepts, on the other hand, tend to be semantically richer due to careful selection of words to define concepts but they tend not to cover the themes in a data set exhaustively. In this paper, we propose a probabilistic framework to combine a hierarchy of human-defined semantic concepts with statistical topic models to seek the best of both worlds. Experimental results using two different sources of concept hierarchies and two collections of text documents indicate that this combination leads to systematic improvements in the quality of the associated language models as well as enabling new techniques for inferring and visualizing the semantics of a document. <|reference_end|>", "<|reference_start|> Gaussian LDA for Topic Models with Word Embeddings: Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents. <|reference_end|>", "<|reference_start|> {Hierarchical attention networks for document classification: We propose a hierarchical attention network for document classification. 
Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences. <|reference_end|>" ]
[ 12, 17, 30, 38 ]
{"<|multi_cite_1_1|>": "ss-1126779", "<|multi_cite_1_2|>": "ss-1067570", "<|multi_cite_2_1|>": "ss-1453424", "<|multi_cite_2_2|>": "ss-1256249", "<|multi_cite_3_1|>": "ss-803516", "<|multi_cite_3_2|>": "ss-995508", "<|cite_4|>": "ss-1519691", "<|cite_5|>": "ss-1126779", "<|cite_6|>": "ss-1295902", "<|cite_7|>": "ss-1100382", "<|multi_cite_8_1|>": "ss-2394301", "<|multi_cite_8_2|>": "ss-1096759", "<|cite_9|>": "ss-1067570", "<|cite_10|>": "ss-1126779", "<|cite_11|>": "ss-1007328", "<|cite_12|>": "ss-1537993", "<|cite_13|>": "ss-1376711", "<|cite_14|>": "ss-1969408", "<|cite_15|>": "ss-1295902", "<|cite_16|>": "ss-1100382", "<|cite_17|>": "arxiv-34298", "<|cite_18|>": "arxiv-22967", "<|cite_19|>": "ss-1237282", "<|cite_20|>": "ss-1678581", "<|cite_21|>": "ss-2394301", "<|cite_22|>": "ss-1096759", "<|cite_23|>": "arxiv-111463", "<|multi_cite_24_1|>": "arxiv-51600", "<|multi_cite_24_2|>": "ss-806920", "<|multi_cite_24_3|>": "arxiv-102185", "<|multi_cite_25_1|>": "ss-1282780", "<|multi_cite_25_2|>": "ss-1985640", "<|multi_cite_25_3|>": "arxiv-95056", "<|multi_cite_25_4|>": "ss-975442", "<|cite_26|>": "arxiv-176290", "<|cite_27|>": "arxiv-172494", "<|cite_28|>": "arxiv-213929", "<|cite_29|>": "arxiv-65210", "<|cite_30|>": "ss-1112550", "<|cite_31|>": "arxiv-81941", "<|cite_32|>": "arxiv-158058", "<|multi_cite_33_1|>": "ss-1282221", "<|multi_cite_33_2|>": "ss-1282221", "<|multi_cite_33_3|>": "arxiv-162122", "<|cite_34|>": "arxiv-161851", "<|cite_35|>": "arxiv-153918"}
2409.03519
<|paper_start|> Title: Tissue Concepts: supervised foundation models in computational pathology Abstract: Tissue Concepts: supervised foundation models in computational pathology: Due to the increasing workload of pathologists, the need for automation to support diagnostic tasks and quantitative biomarker evaluation is becoming more and more apparent. Foundation models have the potential to improve generalizability within and across centers and serve as starting points for data-efficient development of specialized yet robust AI models. However, training the foundation models themselves is usually very expensive in terms of data, computation, and time. This paper proposes a supervised training method that drastically reduces these expenses. The proposed method is based on multi-task learning to train a joint encoder by combining 16 different classification, segmentation, and detection tasks on a total of 912,000 patches. Since the encoder is capable of capturing the properties of the samples, we term it the Tissue Concepts encoder. To evaluate the performance and generalizability of the Tissue Concepts encoder across centers, classification of whole slide images from four of the most prevalent solid cancers - breast, colon, lung, and prostate - was used. The experiments show that the Tissue Concepts model achieves comparable performance to models trained with self-supervision, while requiring only 6% of the amount of training patches. Furthermore, the Tissue Concepts encoder outperforms an ImageNet pre-trained encoder on both in-domain and out-of-domain data. Introduction \label{sec:into} The need for diagnostic systems to help pathologists manage the anticipated workload increases as cancer cases worldwide are on the rise <|cite_start|> (Reference: Analysis of Histopathological images: An Overview: Histopathology is the study of change in tissues and cells affected by the disease and finding the root cause of the disease. Over recent years there is huge improvement in image analysis algorithms as well as in the computation power. In this paper we will review various techniques given by different authors for histopathological image analysis. Also we will cover the various methods for image preprocessing, segmentation, feature extraction and classification which are basic steps of histopathological image analysis.) <|cite_end|>. As <|cite_start|> (Reference: Global cancer statistics: The global burden of cancer continues to increase largely because of the aging and growth of the world population alongside an increasing adoption of cancer‐causing behaviors, particularly smoking, in economically developing countries. Based on the GLOBOCAN 2008 estimates, about 12.7 million cancer cases and 7.6 million cancer deaths are estimated to have occurred in 2008; of these, 56% of the cases and 64% of the deaths occurred in the economically developing world. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% of the total cancer cases and 14% of the cancer deaths. Lung cancer is the leading cancer site in males, comprising 17% of the total new cancer cases and 23% of the total cancer deaths. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer.
Further, the mortality burden for lung cancer among females in developing countries is as high as the burden for cervical cancer, with each accounting for 11% of the total female cancer deaths. Although overall cancer incidence rates in the developing world are half those seen in the developed world in both sexes, the overall cancer mortality rates are generally similar. Cancer survival tends to be poorer in developing countries, most likely because of a combination of a late stage at diagnosis and limited access to timely and standard treatment. A substantial proportion of the worldwide burden of cancer could be prevented through the application of existing cancer control knowledge and by implementing programs for tobacco control, vaccination (for liver and cervical cancers), and early detection and treatment, as well as public health campaigns promoting physical activity and a healthier dietary intake. Clinicians, public health professionals, and policy makers can play an active role in accelerating the application of such interventions globally. CA Cancer J Clin 2011. © 2011 American Cancer Society, Inc.) <|cite_end|> estimate, breast, colorectal, prostate, and lung cancers are among the six most common cancer types. Projections suggest that cases of these cancers will continue to increase, posing significant challenges due to time-consuming diagnosis, increased demand for tumor subtyping, and personalized treatment <|cite_start|> (Reference: Estimated Projection of US Cancer Incidence and Death to 2040: Key Points Question How will the landscape of cancer incidences and deaths change in the next 2 decades? Findings In this cross-sectional study, the results estimate that leading cancer incidences and deaths in the US will be notably different in the year 2040 compared with current rankings. Estimates included increases in melanoma incidence, pancreatic cancer deaths, and liver cancer deaths, and decreases in prostate cancer incidence and breast cancer deaths. Meaning These estimates will be important to guide research, health care, and health policy efforts and emphasize the importance of cancer screening, early detection, and prevention.) <|cite_end|> <|cite_start|> (Reference: Planning for tomorrow: global cancer incidence and the role of prevention 2020–2070: ) <|cite_end|> <|cite_start|> (Reference: {Cancer statistics, 2023: Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths in the United States and compiles the most recent data on population‐based cancer occurrence and outcomes using incidence data collected by central cancer registries and mortality data collected by the National Center for Health Statistics. In 2023, 1,958,310 new cancer cases and 609,820 cancer deaths are projected to occur in the United States. Cancer incidence increased for prostate cancer by 3% annually from 2014 through 2019 after two decades of decline, translating to an additional 99,000 new cases; otherwise, however, incidence trends were more favorable in men compared to women. For example, lung cancer in women decreased at one half the pace of men (1.1% vs. 2.6% annually) from 2015 through 2019, and breast and uterine corpus cancers continued to increase, as did liver cancer and melanoma, both of which stabilized in men aged 50 years and older and declined in younger men.
However, a 65% drop in cervical cancer incidence during 2012 through 2019 among women in their early 20s, the first cohort to receive the human papillomavirus vaccine, foreshadows steep reductions in the burden of human papillomavirus‐associated cancers, the majority of which occur in women. Despite the pandemic, and in contrast with other leading causes of death, the cancer death rate continued to decline from 2019 to 2020 (by 1.5%), contributing to a 33% overall reduction since 1991 and an estimated 3.8 million deaths averted. This progress increasingly reflects advances in treatment, which are particularly evident in the rapid declines in mortality (approximately 2% annually during 2016 through 2020) for leukemia, melanoma, and kidney cancer, despite stable/increasing incidence, and accelerated declines for lung cancer. In summary, although cancer mortality rates continue to decline, future progress may be attenuated by rising incidence for breast, prostate, and uterine corpus cancers, which also happen to have the largest racial disparities in mortality.) <|cite_end|>. Deep learning (DL) has made significant progress in medical imaging, particularly in the field of computational pathology (CPath). Some studies have demonstrated that DL models even surpass human performance in certain tasks, making DL models effective tools to help pathologists cope with the increasing workload <|cite_start|> (Reference: Pathologist-level interpretable whole-slide cancer diagnosis with deep learning: ) <|cite_end|> <|cite_start|> (Reference: A deep learning system for differential diagnosis of skin diseases: Skin conditions affect an estimated 1.9 billion people worldwide. A shortage of dermatologists causes long wait times and leads patients to seek dermatologic care from general practitioners. However, the diagnostic accuracy of general practitioners has been reported to be only 0.24-0.70 (compared to 0.77-0.96 for dermatologists), resulting in referral errors, delays in care, and errors in diagnosis and treatment. In this paper, we developed a deep learning system (DLS) to provide a differential diagnosis of skin conditions for clinical cases (skin photographs and associated medical histories). The DLS distinguishes between 26 skin conditions that represent roughly 80% of the volume of skin conditions seen in primary care. The DLS was developed and validated using de-identified cases from a teledermatology practice serving 17 clinical sites via a temporal split: the first 14,021 cases for development and the last 3,756 cases for validation. On the validation set, where a panel of three board-certified dermatologists defined the reference standard for every case, the DLS achieved 0.71 and 0.93 top-1 and top-3 accuracies respectively. For a random subset of the validation set (n=963 cases), 18 clinicians reviewed the cases for comparison. On this subset, the DLS achieved a 0.67 top-1 accuracy, non-inferior to board-certified dermatologists (0.63, p<0.001), and higher than primary care physicians (PCPs, 0.45) and nurse practitioners (NPs, 0.41). The top-3 accuracy showed a similar trend: 0.90 DLS, 0.75 dermatologists, 0.60 PCPs, and 0.55 NPs. These results highlight the potential of the DLS to augment general practitioners to accurately diagnose skin conditions by suggesting differential diagnoses that may not have been considered. Future work will be needed to prospectively assess the clinical impact of using this tool in actual clinical workflows.) <|cite_end|>. 
However, the unavailability of the required large data sets and the investment of time and effort needed limit the effectiveness and impact of DL models in pathology. Recent advances in self-supervised learning have enabled the training of deep neural networks on large amounts of unlabeled medical data, resulting in the creation of foundation models in computer vision <|cite_start|> (Reference: A Systematic Literature Mining of Sponge City: What Has Been Done and The Challenges Standing Ahead: As the increase threat of flood risk and environmental safety due to the urbanization, Sponge city research has been attracting extensive attention both in practical and theoretical research field. To date, there are only scattered studies about Sponge city. Moreover, vary names of Sponge city prevalent in different countries, which leads to disconnection of literature in the same field of Sponge city. In this paper, a thorough systematic literature mining of Sponge city is presented. A literature analysis system is created, which includes literature export from Web of Sciences and systematic analysis via NoteExpress and CiteSpace. Some literature statistical results are derived. Challenges and opportunities for future research are anticipated. Our goals are to promote this promising thought, summarize past research, and identify issues for future research to create impacts on the practice of Sponge city.) <|cite_end|>. These models are pre-trained on a wide range of images, primarily using self-supervision through contrastive learning or masked image modeling. They have been shown to perform well in downstream tasks, including patch classification and weakly labeled whole slide image (WSI) classification <|cite_start|> (Reference: Transformer-based unsupervised contrastive learning for histopathological image classification: ) <|cite_end|> <|cite_start|> (Reference: Computational Pathology for Brain Disorders: Non-invasive brain imaging techniques allow understanding the behavior and macro changes in the brain to determine the progress of a disease. However, computational pathology provides a deeper understanding of brain disorders at cellular level, able to consolidate a diagnosis and make the bridge between the medical image and the omics analysis. In traditional histopathology, histology slides are visually inspected, under the microscope, by trained pathologists. This process is time-consuming and labor-intensive; therefore, the emergence of Computational Pathology has triggered great hope to ease this tedious task and make it more robust. This chapter focuses on understanding the state-of-the-art machine learning techniques used to analyze whole slide images within the context of brain disorders. We present a selective set of remarkable machine learning algorithms providing discriminative approaches and quality results on brain disorders. These methodologies are applied to different tasks, such as monitoring mechanisms contributing to disease progression and patient survival rates, analyzing morphological phenotypes for classification and quantitative assessment of disease, improving clinical care, diagnosing tumor specimens, and intraoperative interpretation. Thanks to the recent progress in machine learning algorithms for high-content image processing, computational pathology marks the rise of a new generation of medical discoveries and clinical protocols, including in brain disorders.) <|cite_end|>.
Projects such as The Cancer Genome Atlas (TCGA) provide a data source of thousands of WSIs for training these networks on real-world data. This vast amount of data is necessary for networks trained with self-supervision to reach their full potential <|cite_start|> (Reference: A Systematic Literature Mining of Sponge City: What Has Been Done and The Challenges Standing Ahead: As the increase threat of flood risk and environmental safety due to the urbanization, Sponge city research has been attracting extensive attention both in practical and theoretical research field. To date, there are only scattered studies about Sponge city. Moreover, vary names of Sponge city prevalent in different countries, which leads to disconnection of literature in the same field of Sponge city. In this paper, a thorough systematic literature mining of Sponge city is presented. A literature analysis system is created, which includes literature export from Web of Sciences and systematic analysis via NoteExpress and CiteSpace. Some literature statistical results are derived. Challenges and opportunities for future research are anticipated. Our goals are to promote this promising thought, summarize past research, and identify issues for future research to create impacts on the practice of Sponge city.) <|cite_end|>. However, the amount of resources required to create, train, and deploy such models has raised concerns among researchers about the environmental and other impacts <|cite_start|> (Reference: The carbon impact of artificial intelligence: ) <|cite_end|> <|cite_start|> (Reference: Aligning artificial intelligence with climate change mitigation: ) <|cite_end|> <|cite_start|> (Reference: Curbing the carbon footprint of health care.: ) <|cite_end|>. In addition, extended training periods of several weeks impede development cycles and prolong research time. Supervised models, on the other hand, have been shown to outperform models trained with self-supervision in some tasks <|cite_start|> (Reference: Does Self-Injury Hurt?: Nonsuicidal self-injury (NSSI) represents a distinct topic in the study of pain in that it involves intentional engagement in behaviors despite, or potentially because of, the physical pain they elicit. This chapter reviews the experience of pain as part of NSSI, discussing research examining whether pain is experienced differently by self-injurers and what functions that pain elicited by NSSI may serve. In particular, the authors discuss the potential emotion regulation functions of pain elicited by NSSI. In addition, they discuss biological and psychological theories about the way pain and NSSI may serve these functions. Lastly, they discuss clinical implications and emphasize the importance for all treatment providers, in both medical and mental health settings, to understand the role of pain in NSSI in order to provide empathetic, nonjudgmental, and effective treatment for the growing number of children, adolescents, and adults engaging in these behaviors.) <|cite_end|> <|cite_start|> (Reference: Segment Anything Model for Medical Images?: The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales.
To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: 1) SAM showed remarkable performance in some specific objects but was unstable, imperfect, or even totally failed in other situations. 2) SAM with the large ViT-H showed better overall performance than that with the small ViT-B. 3) SAM performed better with manual hints, especially box, than the Everything mode. 4) SAM could help human annotation with high labeling quality and less time. 5) SAM was sensitive to the randomness in the center point and tight box prompts, and may suffer from a serious performance drop. 6) SAM performed better than interactive methods with one or a few points, but will be outpaced as the number of points increases. 7) SAM's performance correlated to different factors, including boundary complexity, intensity differences, etc. 8) Finetuning the SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS, and guide how to appropriately use and develop SAM.) <|cite_end|>. Although there are many annotated datasets available through challenges or other benchmarks, these datasets vary in size and contain annotations with varying degrees of detail. This variability between the datasets makes it challenging to condense the knowledge they contain into a single model. One approach to integrating all of these label types is to use multi-task learning (MTL) <|cite_start|> (Reference: Multitask Learning: ) <|cite_end|>. In <|cite_start|> (Reference: Overcoming Data Gaps in Sales Analysis: In the rapidly evolving global marketplace, businesses face the critical challenge of navigating through extensive sales data to derive actionable insights. This paper explores the complexities of aligning and analyzing sales data from various third-party vendors across different geographies, highlighting the significant impact of data misalignment and gaps on strategic decision-making.) <|cite_end|>, we recently proposed a learning framework that combines the information contained in different labeling strategies, including detection, segmentation, and classification, and used it to train a single shared backbone model on a large corpus of images. In that study, images from different medical imaging domains, such as CT, X-ray, and microscopy, as well as non-medical images, were included. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{imgs/figure_one_smaller.png} \caption{Overview of the study. a) Different pre-training of Tissue Concepts using multi-task learning on 16 different tasks. b) The shared encoder is evaluated using multiple-instance learning on WSI classification. From each WSI, patches of size $224 \times 224$ are extracted in an iterative windowing fashion and the latent representation is positioned at the same spatial location as the patches.
A simple CNN is trained on the latent WSIs to learn the label at the slide level.} \label{fig:overview} \end{figure*} This paper demonstrates that training a foundation model on supervised signals in CPath using MTL requires less data, time, and energy than training with self-supervision. At the same time, the measured performance is similar to that obtained from models trained on about 17 times more data without supervision. Following the MTL training scheme presented in \autoref{fig:overview}a, this paper presents \textit{Tissue Concepts} (TC), a robust encoder that is trained on a mixture of diverse annotations from small and medium-sized datasets in CPath to learn different concepts related to tissue. Considering the projected increase in cancer cases and the demands of the clinical workflow, we evaluated the performance of the encoder on the four major cancer types - breast, colon, lung, and prostate - for whole slide image classification, as shown in \autoref{fig:overview}b. In addition, since models trained on one site are known to perform worse when evaluated on different sites, we test the performance of Tissue Concepts using a cross-center evaluation scheme <|cite_start|> (Reference: The impact of site-specific digital histology signatures on deep learning model accuracy and bias: ) <|cite_end|>. The paper's main contributions can be summarized as follows. \begin{itemize} \item We show that diverse pre-training using MTL learns robust representations and drastically reduces the required amount of data compared to self-supervised approaches. \item Our evaluation of the Tissue Concepts encoder on four of the most prevalent cancer types across multiple centers highlights the generalizability of our approach. \end{itemize} Related Work \label{sec:related_work} First approaches using MTL in CPath were presented by <|cite_start|> (Reference: Multi-task pre-training of deep neural networks for digital pathology: In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-size datasets have been released by the community over the years whereas there is no large scale dataset similar to ImageNet in the domain. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model and a robust evaluation and selection protocol in order to evaluate our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and is able to recover the lack of specificity of ImageNet features, as both pre-training sources yield comparable performance.) <|cite_end|> and <|cite_start|> (Reference: One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification: The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery.
However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.) <|cite_end|>. Mormont and colleagues converted different datasets into 22 classification tasks to train a shared network and contrasted the learned encoders against ImageNet weights. An SVM was trained on the latent representations of the encoder. They found that the representations perform on par with or better than the baseline ImageNet weights. Graham et al. then used MTL on segmentation and classification tasks. This research focused on specific tasks that were present in the pre-training. However, the evaluation of the general-purpose encoder and the corresponding latent representations based on whole slide image classification combined with cross-center evaluation is still an unexplored area. In addition, general purpose encoders in the form of foundation models have not been considered by <|cite_start|> (Reference: One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification: The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection.
As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.) <|cite_end|>. In <|cite_start|> (Reference: Overcoming Data Gaps in Sales Analysis: In the rapidly evolving global marketplace, businesses face the critical challenge of navigating through extensive sales data to derive actionable insights. This paper explores the complexities of aligning and analyzing sales data from various third-party vendors across different geographies, highlighting the significant impact of data misalignment and gaps on strategic decision-making.) <|cite_end|> we presented a first approach using MTL to train supervised foundation models. By utilizing expert knowledge in the form of multi-task learning we trained a shared model, called UMedPT, which can be applied to various medical images. To achieve this, different imaging domains, such as CT, X-ray, and microscopy, were used to train a shared backbone on classification, segmentation, and detection tasks; a schematic sketch of this shared-encoder setup is given below. Currently, the impact of tasks outside the histopathology domain remains unclear due to the diverse pre-training of the encoder. This impact on performance and robustness requires further investigation. The following sections focus in more detail on two topics discussed in this paper. While foundation models are still largely unexplored in terms of their application and performance, some approaches are mentioned below. \subsection{Foundation Models} A foundation model is broadly defined as being trained on a wide variety of data and being easily adaptable to many different downstream tasks <|cite_start|> (Reference: Are There Opportunities in Opportunity Zones: ) <|cite_end|> <|cite_start|> (Reference: Towards artificial general intelligence via a multimodal foundation model: The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of human. Despite tremendous success in the AI research, most of existing methods have only single-cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted for various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning with weak semantic correlation data crawled from the Internet and show that promising results can be obtained on a wide range of downstream tasks. Particularly, with the developed model-interpretability tools, we demonstrate that strong imagination ability is now possessed by our foundation model. We believe that our work makes a transformative stride towards AGI, from our common practice of "weak or narrow AI" to that of "strong or generalized AI".) <|cite_end|>. <|cite_start|> (Reference: Transformer-based unsupervised contrastive learning for histopathological image classification: ) <|cite_end|> used data from the TCGA in combination with data from the pathology AI platform PAIP to train a modified Swin transformer, called CTransPath (CTP), using self-supervision on 15 million patches.
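As referenced above, the following is a minimal, hypothetical PyTorch sketch of multi-task training with one shared encoder and task-specific heads; the backbone, head shapes, and task set are illustrative assumptions, not the actual UMedPT or Tissue Concepts implementation:

\begin{verbatim}
# Hedged sketch of a shared encoder trained on multiple tasks.
# Backbone, heads, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
heads = nn.ModuleDict({
    "cls": nn.Linear(32, 9),                  # patch classification
    "seg": nn.Sequential(                     # toy segmentation head
        nn.Unflatten(1, (32, 1, 1)),
        nn.Upsample(size=224), nn.Conv2d(32, 2, 1)),
})
criterion = nn.CrossEntropyLoss()
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)

def training_step(batches):
    """batches: dict mapping task name -> (images, targets).
    Losses are summed so every task updates the shared encoder."""
    opt.zero_grad()
    total = sum(criterion(heads[t](encoder(x)), y)
                for t, (x, y) in batches.items())
    total.backward()
    opt.step()
    return float(total)
\end{verbatim}

In such a setup, a detection task would add a further head and loss; the key design choice is that gradients from all annotation types flow into the same encoder parameters.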
They presented an adapted contrastive loss, based on MoCo v3 <|cite_start|> (Reference: An Empirical Assessment of Empirical Corporate Finance: Abstract We empirically evaluate 20 prominent contributions across a broad range of areas in the empirical corporate finance literature. We assemble the necessary data and apply a single, simple econometric method, the connected-groups approach of Abowd et al. to appraise the extent to which prevailing empirical specifications explain variation of the dependent variable, differ in composition of fit arising from various classes of independent variables, and exhibit resistance to omitted variable bias and other endogeneity problems. We assess empirical performance across a wide spectrum of areas in corporate finance and indicate varying research opportunities for empiricists and theorists.) <|cite_end|>, which uses a memory bank to retrieve the top-S semantically most relevant entries. These entries were used as additional positive examples for the loss calculation. The authors evaluated their model using patch classification, image retrieval, and weakly labeled WSI classification. Due to the large number of training images and the slow convergence of self-supervised training, they reported a training time of 250 hours on 48 GPUs (12,000 GPU-hours). <|cite_start|> (Reference: Computational Pathology for Brain Disorders: Non-invasive brain imaging techniques allow understanding the behavior and macro changes in the brain to determine the progress of a disease. However, computational pathology provides a deeper understanding of brain disorders at cellular level, able to consolidate a diagnosis and make the bridge between the medical image and the omics analysis. In traditional histopathology, histology slides are visually inspected, under the microscope, by trained pathologists. This process is time-consuming and labor-intensive; therefore, the emergence of Computational Pathology has triggered great hope to ease this tedious task and make it more robust. This chapter focuses on understanding the state-of-the-art machine learning techniques used to analyze whole slide images within the context of brain disorders. We present a selective set of remarkable machine learning algorithms providing discriminative approaches and quality results on brain disorders. These methodologies are applied to different tasks, such as monitoring mechanisms contributing to disease progression and patient survival rates, analyzing morphological phenotypes for classification and quantitative assessment of disease, improving clinical care, diagnosing tumor specimens, and intraoperative interpretation. Thanks to the recent progress in machine learning algorithms for high-content image processing, computational pathology marks the rise of a new generation of medical discoveries and clinical protocols, including in brain disorders.) <|cite_end|> presented a comparable approach, training a tiny vision transformer (ViT) using standard DINO, and a ViT base model using standard masked autoencoding (MAE), both trained on about 3 billion patches. The authors report a training time of over 3,000 GPU-hours for the models, which were evaluated on a variety of tasks ranging from disease detection to outcome prediction. The evaluation also included images scanned at a different hospital than the training slides.
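The retrieval-based positive mining described above can be sketched in a few lines of PyTorch. This is an illustrative approximation only: the function name, the fixed `memory_bank` tensor, the `top_s` pseudo-positive count, and the temperature `tau` are assumptions for the sketch, not the exact loss used by CTransPath.

```python
import torch
import torch.nn.functional as F

def retrieval_contrastive_loss(query, key, memory_bank, top_s=5, tau=0.2):
    """InfoNCE-style loss in which the top-S most similar memory-bank
    entries act as additional positives (illustrative sketch only)."""
    query = F.normalize(query, dim=1)       # (B, D) anchor view
    key = F.normalize(key, dim=1)           # (B, D) augmented view
    bank = F.normalize(memory_bank, dim=1)  # (K, D) stored embeddings

    bank_logits = query @ bank.t() / tau    # similarity to every bank entry
    # Retrieve the S semantically closest entries per query as pseudo-positives.
    _, pos_idx = bank_logits.topk(top_s, dim=1)
    pos_mask = torch.zeros_like(bank_logits).scatter_(1, pos_idx, 1.0)

    # The augmented view of the same patch is always a positive.
    pair_logit = (query * key).sum(dim=1, keepdim=True) / tau
    logits = torch.cat([pair_logit, bank_logits], dim=1)
    positives = torch.cat([torch.ones_like(pair_logit), pos_mask], dim=1)

    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-likelihood over all positives, then over the batch.
    return -(positives * log_prob).sum(1).div(positives.sum(1)).mean()
```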
<|cite_start|> (Reference: Towards a general-purpose foundation model for computational pathology.: ) <|cite_end|> present a general-purpose foundation model that leverages over 100 million patches from 100,000 WSI slides across 20 major cancer types. They train a large ViT on patches collected from an internal dataset, using DINOv2 <|cite_start|> (Reference: DINOv2: Learning Robust Visual Features without Supervision: The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.) <|cite_end|>. They evaluate the model on 34 tasks and find that it surpasses the previous baselines on most of them. The model was trained on 24 80 GB GPUs. Overall, all of the presented models rely on large image databases and require long training times, which contributes to increased CO\textsubscript{2} emissions. The presented TC encoder and MTL training aim to reduce the need for large amounts of data while maintaining the desired performance. In addition, cross-center evaluation is needed to accurately assess model performance. \subsection{Weakly Labeled WSI Classification} \label{sec:wsi_clf} Learning from WSIs that are only labeled on a case basis, or that have only one endpoint, is challenging because training on the entire image at once typically exceeds the available GPU memory. In addition, since a WSI provides only one sample, many WSIs are needed to effectively train a deep learning model. Classification of such gigapixel images is therefore typically performed using multiple instance learning (MIL) <|cite_start|> (Reference: Clinical-grade computational pathology using weakly supervised deep learning on whole slide images: ) <|cite_end|> <|cite_start|> (Reference: Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology: ) <|cite_end|>. Using MIL involves two parts: first, extracting features from patches of the WSI using a pre-trained encoder to convert them into their latent representations, and second, aggregating features from a WSI using a trainable MIL head to predict the given label. Therefore, robust encoders are needed to obtain patch representations that facilitate the second step of MIL <|cite_start|> (Reference: Immune subtyping of melanoma whole slide images using multiple instance learning: ) <|cite_end|>.
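To make these two parts concrete, the sketch below implements the second, trainable stage as an attention-pooling head in the spirit of Ilse et al.; the feature dimension, hidden size, and class count are placeholder assumptions, and the first stage (a frozen, pre-trained encoder) is indicated only in the usage comment.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Second MIL stage: pool patch features of one WSI into a slide-level
    prediction via learned attention weights (Ilse-style sketch)."""
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                       # feats: (n_patches, feat_dim)
        attn = torch.softmax(self.attention(feats), dim=0)  # one weight per patch
        slide_repr = (attn * feats).sum(dim=0)      # attention-weighted pooling
        return self.classifier(slide_repr), attn

# First MIL stage (frozen encoder) feeds the second stage, e.g.:
# feats = torch.stack([encoder(p).squeeze(0) for p in patches]).detach()
# logits, attn = AttentionMIL()(feats)
```

The returned attention weights also indicate which patches contributed most to the slide-level decision, which is what makes this aggregator attractive for interpretation.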
In this paper, MIL is used as an evaluation procedure to test the representativeness of the encoder's features. The following presents the most commonly used approaches, which focus on solving the second stage of MIL using either attention- or convolution-based methods. <|cite_start|> (Reference: Data-efficient and weakly supervised computational pathology on whole-slide images: ) <|cite_end|> introduced CLAM, a clustering-constrained attention MIL algorithm. The authors trained an attention-based head on features extracted from patches of a WSI to classify the corresponding labels. The attention was then used to identify sub-regions of high diagnostic value, which in turn were used to classify the entire slide. In addition, instance-level clustering was applied over the representative regions to constrain and refine the feature space. Subsequent work proposed TransMIL, an attention-based correlation method for solving weakly labeled classification tasks. The method uses differently sized convolutional layers to apply additional pyramid position encoding information between the attention modules. This allows the attention layers to aggregate morphological features, while the Pyramid Position Encoding Generator (PPEG) encodes spatial information. <|cite_start|> (Reference: Semi-Parametric Neural Image Synthesis: Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Much of this success is due to the scalability of these architectures and hence caused by a dramatic increase in model complexity and in the computational resources invested in training these models. Our work questions the underlying paradigm of compressing large training data into ever growing parametric representations. We rather present an orthogonal, semi-parametric approach. We complement comparably small diffusion or autoregressive models with a separate image database and a retrieval strategy. During training we retrieve a set of nearest neighbors from this external database for each training instance and condition the generative model on these informative samples. While the retrieval approach is providing the (local) content, the model is focusing on learning the composition of scenes based on this content. As demonstrated by our experiments, simply swapping the database for one with different contents transfers a trained model post-hoc to a novel domain. The evaluation shows competitive performance on tasks which the generative model has not been trained on, such as class-conditional synthesis, zero-shot stylization or text-to-image synthesis without requiring paired text-image data. With negligible memory and computational overhead for the external database and retrieval we can significantly reduce the parameter count of the generative model and still outperform the state-of-the-art.) <|cite_end|> proposed neural image compression to train on entire WSIs. The authors trained an autoencoder on patches and used the resulting encoder for feature extraction. The patches extracted in the image domain were encoded, and their latent representations were placed at the corresponding spatial locations. This effectively compressed the entire WSI into a smaller latent image with more channels, while preserving the spatial relationship between the individual patches. A small CNN was then trained on the compressed WSIs to predict the label of the WSI.
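A minimal sketch of this compression step is given below. The grid size, latent dimension, and `encoder` are hypothetical stand-ins for whatever the trained autoencoder provides, and the small CNN is one possible classifier head rather than the exact architecture used in that work.

```python
import torch
import torch.nn as nn

def compress_wsi(patches, coords, encoder, grid_hw, latent_dim):
    """Encode each patch and place its embedding at the patch's (row, col)
    grid position, turning the WSI into a small multi-channel latent image."""
    h, w = grid_hw
    latent = torch.zeros(latent_dim, h, w)
    with torch.no_grad():                    # the encoder stays frozen here
        for patch, (row, col) in zip(patches, coords):
            latent[:, row, col] = encoder(patch.unsqueeze(0)).squeeze(0)
    return latent                            # spatial layout is preserved

# A small CNN then predicts the slide label from the compressed WSI
# (latent_dim=128 and two classes are illustrative assumptions):
slide_cnn = nn.Sequential(
    nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2),
)
```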
In a second version of this approach, the same authors used multi-task learning on four classification tasks to train the feature extractor <|cite_start|> (Reference: Extending Unsupervised Neural Image Compression With Supervised Multitask Learning: We focus on the problem of training convolutional neural networks on gigapixel histopathology images to predict image-level targets. For this purpose, we extend Neural Image Compression (NIC), an image compression framework that reduces the dimensionality of these images using an encoder network trained unsupervisedly. We propose to train this encoder using supervised multitask learning (MTL) instead. We applied the proposed MTL NIC to two histopathology datasets and three tasks. First, we obtained state-of-the-art results in the Tumor Proliferation Assessment Challenge of 2016 (TUPAC16). Second, we successfully classified histopathological growth patterns in images with colorectal liver metastasis (CLM). Third, we predicted patient risk of death by learning directly from overall survival in the same CLM data. Our experimental results suggest that the representations learned by the MTL objective are: (1) highly specific, due to the supervised training signal, and (2) transferable, since the same features perform well across different tasks. Additionally, we trained multiple encoders with different training objectives, e.g. unsupervised and variants of MTL, and observed a positive correlation between the number of tasks in MTL and the system performance on the TUPAC16 dataset.) <|cite_end|>. The effect of segmentation and detection tasks, as well as of more diverse pre-training, remained points of further investigation and are part of the research presented in this paper. All of the presented methods propose different aggregation strategies for learning label predictions from the extracted features and therefore work with the features extracted by the TC encoder. As an evaluation method, we adapted the convolution-based aggregation method presented by <|cite_start|> (Reference: Semi-Parametric Neural Image Synthesis: Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Much of this success is due to the scalability of these architectures and hence caused by a dramatic increase in model complexity and in the computational resources invested in training these models. Our work questions the underlying paradigm of compressing large training data into ever growing parametric representations. We rather present an orthogonal, semi-parametric approach. We complement comparably small diffusion or autoregressive models with a separate image database and a retrieval strategy. During training we retrieve a set of nearest neighbors from this external database for each training instance and condition the generative model on these informative samples. While the retrieval approach is providing the (local) content, the model is focusing on learning the composition of scenes based on this content. As demonstrated by our experiments, simply swapping the database for one with different contents transfers a trained model post-hoc to a novel domain. The evaluation shows competitive performance on tasks which the generative model has not been trained on, such as class-conditional synthesis, zero-shot stylization or text-to-image synthesis without requiring paired text-image data.
With negligible memory and computational overhead for the external database and retrieval we can significantly reduce the parameter count of the generative model and still outperform the state-of-the-art.) <|cite_end|> and also applied an attention-based approach based on <|cite_start|> (Reference: Attention-based Deep Multiple Instance Learning: Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.) <|cite_end|>, both of which are further described in Section \ref{subsec:mil}. <|paper_end|>
[ "<|reference_start|> Planning for tomorrow: global cancer incidence and the role of prevention 2020–2070: <|reference_end|>", "<|reference_start|> Aligning artificial intelligence with climate change mitigation: <|reference_end|>", "<|reference_start|> Multitask Learning: <|reference_end|>", "<|reference_start|> Semi-Parametric Neural Image Synthesis: Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Much of this success is due to the scalability of these architectures and hence caused by a dramatic increase in model complexity and in the computational resources invested in training these models. Our work questions the underlying paradigm of compressing large training data into ever growing parametric representations. We rather present an orthogonal, semi-parametric approach. We complement comparably small diffusion or autoregressive models with a separate image database and a retrieval strategy. During training we retrieve a set of nearest neighbors from this external database for each training instance and condition the generative model on these informative samples. While the retrieval approach is providing the (local) content, the model is focusing on learning the composition of scenes based on this content. As demonstrated by our experiments, simply swapping the database for one with different contents transfers a trained model post-hoc to a novel domain. The evaluation shows competitive performance on tasks which the generative model has not been trained on, such as class-conditional synthesis, zero-shot stylization or text-to-image synthesis without requiring paired text-image data. With negligible memory and computational overhead for the external database and retrieval we can significantly reduce the parameter count of the generative model and still outperform the state-of-the-art. <|reference_end|>" ]
[ 3, 12, 16, 34 ]
{"<|cite_1|>": "ss-2370697", "<|cite_2|>": "ss-2370698", "<|multi_cite_3_1|>": "ss-2370699", "<|multi_cite_3_2|>": "ss-2370700", "<|multi_cite_3_3|>": "ss-823202", "<|multi_cite_4_1|>": "ss-2547648", "<|multi_cite_4_2|>": "arxiv-223239", "<|cite_5|>": "ss-841780", "<|multi_cite_6_1|>": "ss-677757", "<|multi_cite_6_2|>": "ss-1364368", "<|cite_7|>": "ss-841780", "<|multi_cite_8_1|>": "ss-1525464", "<|multi_cite_8_2|>": "ss-1342585", "<|multi_cite_8_3|>": "ss-2370701", "<|multi_cite_9_1|>": "ss-957518", "<|multi_cite_9_2|>": "arxiv-500665", "<|cite_10|>": "ss-1179340", "<|cite_11|>": "ss-2370702", "<|cite_12|>": "ss-724079", "<|cite_13|>": "arxiv-263749", "<|cite_14|>": "ss-819520", "<|cite_15|>": "ss-819520", "<|cite_16|>": "ss-2370702", "<|multi_cite_17_1|>": "ss-1844700", "<|multi_cite_17_2|>": "arxiv-377237", "<|cite_18|>": "ss-677757", "<|cite_19|>": "ss-1940116", "<|cite_20|>": "ss-1364368", "<|cite_21|>": "ss-1176904", "<|cite_22|>": "arxiv-497121", "<|multi_cite_23_1|>": "ss-680634", "<|multi_cite_23_2|>": "ss-1548611", "<|cite_25|>": "ss-1343814", "<|cite_26|>": "ss-756041", "<|cite_28|>": "ss-2356564", "<|cite_29|>": "ss-2370703", "<|cite_30|>": "ss-2356564", "<|cite_31|>": "arxiv-148247"}
2303.03376
<|paper_start|> Title: MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning Abstract: MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning: Open-ended learning methods that automatically generate a curriculum of increasingly challenging tasks serve as a promising avenue toward generally capable reinforcement learning agents. Existing methods adapt curricula independently over either environment parameters (in single-agent settings) or co-player policies (in multi-agent settings). However, the strengths and weaknesses of co-players can manifest themselves differently depending on environmental features. It is thus crucial to consider the dependency between the environment and co-player when shaping a curriculum in multi-agent domains. In this work, we use this insight and extend Unsupervised Environment Design (UED) to multi-agent environments. We then introduce Multi-Agent Environment Design Strategist for Open-Ended Learning (MAESTRO), the first multi-agent UED approach for two-player zero-sum settings. MAESTRO efficiently produces adversarial, joint curricula over both environments and co-players and attains minimax-regret guarantees at Nash equilibrium. Our experiments show that MAESTRO outperforms a number of strong baselines on competitive two-player games, spanning discrete and continuous control settings. Introduction \vspace{-1mm} The past few years have seen a series of remarkable achievements in producing deep reinforcement learning (RL) agents with expert <|cite_start|> (Reference: Grandmaster level in StarCraft II using multi-agent reinforcement learning: ) <|cite_end|> <|cite_start|> (Reference: Dota 2 with Large Scale Deep Reinforcement Learning: On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.) <|cite_end|> <|cite_start|> (Reference: Outracing champion Gran Turismo drivers with deep reinforcement learning: ) <|cite_end|> and superhuman <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|> <|cite_start|> (Reference: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. 
MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.) <|cite_end|> performance in challenging competitive games. Central to these successes are adversarial training processes that result in curricula creating new challenges at the frontier of an agent's capabilities <|cite_start|> (Reference: Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research: Evolution has produced a multi-scale mosaic of interacting adaptive units. Innovations arise when perturbations push parts of the system away from stable equilibria into new regimes where previously well-adapted solutions no longer work. Here we explore the hypothesis that multi-agent systems sometimes display intrinsic dynamics arising from competition and cooperation that provide a naturally emergent curriculum, which we term an autocurriculum. The solution of one social task often begets new social tasks, continually generating novel challenges, and thereby promoting innovation. Under certain conditions these challenges may become increasingly complex over time, demanding that agents accumulate ever more innovations.) <|cite_end|> <|cite_start|> (Reference: Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems: Multiagent reinforcement learning (MARL) has achieved a remarkable amount of success in solving various types of video games. A cornerstone of this success is the auto-curriculum framework, which shapes the learning process by continually creating new challenging tasks for agents to adapt to, thereby facilitating the acquisition of new skills. In order to extend MARL methods to real-world domains outside of video games, we envision in this blue sky paper that maintaining a diversity-aware auto-curriculum is critical for successful MARL applications. Specifically, we argue that \emph{behavioural diversity} is a pivotal, yet under-explored, component for real-world multiagent learning systems, and that significant work remains in understanding how to design a diversity-aware auto-curriculum. We list four open challenges for auto-curriculum techniques, which we believe deserve more attention from this community. Towards validating our vision, we recommend modelling realistic interactive behaviours in autonomous driving as an important test bed, and recommend the SMARTS/ULTRA benchmark.) <|cite_end|>. Such automatic curricula, or autocurricula, can improve the sample efficiency and generality of trained policies <|cite_start|> (Reference: Open-Ended Learning Leads to Generally Capable Agents: In this work we create agents that can perform well beyond a single, individual task, that exhibit much wider generalisation of behaviour to a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. 
The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. We show that through constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks. Examples of this zero-shot generalisation include good performance on Hide and Seek, Capture the Flag, and Tag. Through analysis and hand-authored probe tasks we characterise the behaviour of our agent, and find interesting emergent heuristic behaviours such as trial-and-error experimentation, simple tool use, option switching, and cooperation. Finally, we demonstrate that the general capabilities of this agent could unlock larger scale transfer of behaviour through cheap finetuning.) <|cite_end|>, as well as induce an open-ended learning process <|cite_start|> (Reference: Open-ended Learning in Symmetric Zero-sum Games: Zero-sum games such as chess and poker are, abstractly, functions that evaluate pairs of agents, for example labeling them `winner' and `loser'. If the game is approximately transitive, then self-play generates sequences of agents of increasing strength. However, nontransitive games, such as rock-paper-scissors, can exhibit strategic cycles, and there is no longer a clear objective -- we want agents to increase in strength, but against whom is unclear. In this paper, we introduce a geometric framework for formulating agent objectives in zero-sum games, in order to construct adaptive sequences of objectives that yield open-ended learning. The framework allows us to reason about population performance in nontransitive games, and enables the development of a new algorithm (rectified Nash response, PSRO_rN) that uses game-theoretic niching to construct diverse populations of effective agents, producing a stronger set of agents than existing algorithms. We apply PSRO_rN to two highly nontransitive resource allocation games and find that PSRO_rN consistently outperforms the existing alternatives.) <|cite_end|> that continues to endlessly robustify an agent. Autocurricula have been effective in multi-agent RL for adapting to different \emph{co-players} in competitive games <|cite_start|> (Reference: Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research: Evolution has produced a multi-scale mosaic of interacting adaptive units. Innovations arise when perturbations push parts of the system away from stable equilibria into new regimes where previously well-adapted solutions no longer work. Here we explore the hypothesis that multi-agent systems sometimes display intrinsic dynamics arising from competition and cooperation that provide a naturally emergent curriculum, which we term an autocurriculum. 
The solution of one social task often begets new social tasks, continually generating novel challenges, and thereby promoting innovation. Under certain conditions these challenges may become increasingly complex over time, demanding that agents accumulate ever more innovations.) <|cite_end|> <|cite_start|> (Reference: Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity: Strategic diversity is often essential in games: in multi-player games, for example, evaluating a player against a diverse set of strategies will yield a more accurate estimate of its performance. Furthermore, in games with non-transitivities diversity allows a player to cover several winning strategies. However, despite the significance of strategic diversity, training agents that exhibit diverse behaviour remains a challenge. In this paper we study how to construct diverse populations of agents by carefully structuring how individuals within a population interact. Our approach is based on interaction graphs, which control the flow of information between agents during training and can encourage agents to specialise on different strategies, leading to improved overall performance. We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games. This is an extended version of the long abstract published at AAMAS.) <|cite_end|> <|cite_start|> (Reference: Emergent Tool Use From Multi-Agent Autocurricula: Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests.) <|cite_end|> <|cite_start|> (Reference: Emergent Complexity via Multi-Agent Competition: Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty. 
This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environment themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX) <|cite_end|> <|cite_start|> (Reference: Neural Auto-Curricula in Two-Player Zero-Sum Games: ) <|cite_end|>, where it is crucial to play against increasingly stronger opponents <|cite_start|> (Reference: A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play: One program to rule them all Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system. Science, this issue p. 1140; see also pp. 1087 and 1118 AlphaZero teaches itself to play three different board games and beats state-of-the-art programs in each. The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.) <|cite_end|> and avoid being exploited by other agents <|cite_start|> (Reference: Grandmaster level in StarCraft II using multi-agent reinforcement learning: ) <|cite_end|>. Here, algorithms such as self-play <|cite_start|> (Reference: A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play: One program to rule them all Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system. Science, this issue p. 1140; see also pp. 1087 and 1118 AlphaZero teaches itself to play three different board games and beats state-of-the-art programs in each. 
The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.) <|cite_end|> <|cite_start|> (Reference: Temporal difference learning and TD-gammon: Ever since the days of Shannon's proposal for a chess-playing algorithm [12] and Samuel's checkers-learning program [10] the domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. Such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. At the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome.) <|cite_end|> and fictitious self-play <|cite_start|> (Reference: Fictitious self-play in extensive-form games: Fictitious play is a popular game-theoretic model of learning in games. However, it has received little attention in practical applications to large problems. This paper introduces two variants of fictitious play that are implemented in behavioural strategies of an extensive-form game. The first variant is a full-width process that is realization equivalent to its normal-form counterpart and therefore inherits its convergence guarantees. However, its computational requirements are linear in time and space rather than exponential. The second variant, Fictitious Self-Play, is a machine learning framework that implements fictitious play in a sample-based fashion. Experiments in imperfect-information poker games compare our approaches and demonstrate their convergence to approximate Nash equilibria.) <|cite_end|> have proven especially effective. Similarly, in single-agent RL, autocurricula methods based on Unsupervised Environment Design \citep[UED,][]{paired} have proven effective in producing agents robust to a wide distribution of \textit{environments} <|cite_start|> (Reference: Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions: While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. 
The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions.) <|cite_end|> <|cite_start|> (Reference: Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions: Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. However, the original POET was unable to demonstrate its full creative potential because of limitations of the algorithm itself and because of external issues including a limited problem space and lack of a universal progress measure. Importantly, both limitations pose impediments not only for POET, but for the pursuit of open-endedness in general. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. Together, these four advances enable the most open-ended algorithmic demonstration to date. The algorithmic innovations are (1) a domain-general measure of how meaningfully novel new challenges are, enabling the system to potentially create and solve interesting challenges endlessly, and (2) an efficient heuristic for determining when agents should goal-switch from one problem to another (helping open-ended search better scale). Outside the algorithm itself, to enable a more definitive demonstration of open-endedness, we introduce (3) a novel, more flexible way to encode environmental challenges, and (4) a generic measure of the extent to which a system continues to exhibit open-ended innovation. Enhanced POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved through other means.) 
<|cite_end|> <|cite_start|> (Reference: Replay-Guided Adversarial Environment Design: Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR$^{\perp}$, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR$^{\perp}$ improves the performance of PAIRED, from which it inherited its theoretical framework.) <|cite_end|> <|cite_start|> (Reference: Evolving Curricula with Regret-Based Environment Design: It remains a significant challenge to train generally capable agents with reinforcement learning (RL). A promising avenue for improving the robustness of RL agents is through the use of curricula. One such class of methods frames environment design as a game between a student and a teacher, using regret-based objectives to produce environment instantiations (or levels) at the frontier of the student agent's capabilities. These methods benefit from their generality, with theoretical guarantees at equilibrium, yet they often struggle to find effective levels in challenging design spaces. By contrast, evolutionary approaches seek to incrementally alter environment complexity, resulting in potentially open-ended learning, but often rely on domain-specific heuristics and vast amounts of computational resources. In this paper we propose to harness the power of evolution in a principled, regret-based curriculum. Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex. ACCEL maintains the theoretical benefits of prior regret-based methods, while providing significant empirical gains in a diverse set of environments. An interactive version of the paper is available at accelagent.github.io.) <|cite_end|>. UED seeks to adapt distributions over environments to maximise a metric of interest. Minimax-regret UED seeks to maximise the \emph{regret} of the learning agent, viewing this process as a game between a teacher that proposes challenging environments and a student that learns to solve them.
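Writing $V^{\theta}(\pi)$ for the expected return of student policy $\pi$ in the environment instantiated by parameters $\theta$, this game is commonly formalised (the notation here follows the UED literature in spirit rather than any single paper) as \[ \mathrm{Regret}^{\theta}(\pi) \;=\; \max_{\pi'} V^{\theta}(\pi') - V^{\theta}(\pi), \qquad \pi^{*} \;\in\; \arg\min_{\pi}\, \max_{\theta}\, \mathrm{Regret}^{\theta}(\pi), \] where the teacher selects $\theta$ to maximise the student's regret and the student updates $\pi$ to minimise it.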
At a Nash equilibrium of such games, the student policy provably reaches a minimax-regret policy over the set of possible environments, thereby providing a strong robustness guarantee. However, prior works in UED focus on single-agent RL and do not address the dependency between the environment and the strategies of other agents within it. In multi-agent domains, the behaviour of other agents plays a critical role in modulating the complexity and diversity of the challenges faced by a learning agent. For example, an empty environment that has no blocks to hide behind might be most challenging when playing against opponent policies that attack head-on, whereas environments that are full of winding hallways might be difficult when playing against defensive policies. Robust RL agents should be expected to interact successfully with a wide assortment of other rational agents in their environment <|cite_start|> (Reference: Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems: Multiagent reinforcement learning (MARL) has achieved a remarkable amount of success in solving various types of video games. A cornerstone of this success is the auto-curriculum framework, which shapes the learning process by continually creating new challenging tasks for agents to adapt to, thereby facilitating the acquisition of new skills. In order to extend MARL methods to real-world domains outside of video games, we envision in this blue sky paper that maintaining a diversity-aware auto-curriculum is critical for successful MARL applications. Specifically, we argue that \emph{behavioural diversity} is a pivotal, yet under-explored, component for real-world multiagent learning systems, and that significant work remains in understanding how to design a diversity-aware auto-curriculum. We list four open challenges for auto-curriculum techniques, which we believe deserve more attention from this community. Towards validating our vision, we recommend modelling realistic interactive behaviours in autonomous driving as an important test bed, and recommend the SMARTS/ULTRA benchmark.) <|cite_end|> <|cite_start|> (Reference: Generalization in Cooperative Multi-Agent Systems: Collective intelligence is a fundamental trait shared by several species of living organisms. It has allowed them to thrive in the diverse environmental conditions that exist on our planet. From simple organisations in an ant colony to complex systems in human groups, collective intelligence is vital for solving complex survival tasks. As is commonly observed, such natural systems are flexible to changes in their structure. Specifically, they exhibit a high degree of generalization when the abilities or the total number of agents changes within a system. We term this phenomenon as Combinatorial Generalization (CG). CG is a highly desirable trait for autonomous systems as it can increase their utility and deployability across a wide range of applications. While recent works addressing specific aspects of CG have shown impressive results on complex domains, they provide no performance guarantees when generalizing towards novel situations. In this work, we shed light on the theoretical underpinnings of CG for cooperative multi-agent systems (MAS). Specifically, we study generalization bounds under a linear dependence of the underlying dynamics on the agent capabilities, which can be seen as a generalization of Successor Features to MAS. 
We then extend the results first for Lipschitz and then arbitrary dependence of rewards on team capabilities. Finally, empirical analysis on various domains using the framework of multi-agent reinforcement learning highlights important desiderata for multi-agent algorithms towards ensuring CG.) <|cite_end|>. Therefore, to become widely applicable, UED must be extended to include multi-agent dynamics as part of the environment design process. \begin{figure} \centering \includegraphics[width=0.72\linewidth]{figures/MAESTRO_v6.pdf} \vspace{-2mm} \caption{\textbf{A diagram of \method{}.} \method{} maintains a population of co-players, each having an individual buffer of high-regret environments. When new environments are sampled, the student's regret is calculated with respect to the corresponding co-player and added to the co-player's buffer. \method{} continually provides high-regret environment/co-player pairs for training the student. } \vspace{-3.8mm} \label{fig:maestro_diagram} \end{figure} We formalise this novel problem as an \emph{Underspecified Partially-Observable Stochastic Game} (UPOSG), which generalises UED to multi-agent settings. We then introduce \methodlongemph{} (\method{}), the first approach to train generally capable agents in two-player UPOSGs such that they are robust to changes in the environment and co-player policies. \method{} is a replay-guided approach that explicitly considers the dependence between agents and environments by jointly sampling over environment/co-player pairs using a regret-based curriculum and population learning (see \cref{fig:maestro_diagram}). In partially observable two-player zero-sum games, we show that at equilibrium, the \method{} student policy reaches a Bayes-Nash Equilibrium with respect to a regret-maximising distribution over environments. Furthermore, in fully observable settings, it attains a Nash-Equilibrium policy in every environment against every rational agent. We assess the curricula induced by \method{} and a variety of strong baselines in two competitive two-player games, namely a sparse-reward grid-based LaserTag environment with discrete actions <|cite_start|> (Reference: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning: To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents' policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.) 
<|cite_end|> and a dense-reward pixel-based MultiCarRacing environment with continuous actions <|cite_start|> (Reference: Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space: Learning competitive behaviors in multi-agent settings such as racing requires long-term reasoning about potential adversarial interactions. This paper presents Deep Latent Competition (DLC), a novel reinforcement learning algorithm that learns competitive visual control policies through self-play in imagination. The DLC agent imagines multi-agent interaction sequences in the compact latent space of a learned world model that combines a joint transition function with opponent viewpoint prediction. Imagined self-play reduces costly sample generation in the real world, while the latent representation enables planning to scale gracefully with observation dimensionality. We demonstrate the effectiveness of our algorithm in learning competitive behaviors on a novel multi-agent racing benchmark that requires planning from image observations. Code and videos available at https://sites.google.com/view/deep-latent-competition.) <|cite_end|>. In both cases, \method{} produces more robust agents than baseline autocurriculum methods on out-of-distribution (OOD) human-designed environment instances against unseen co-players. Furthermore, we show that \method{} agents, trained only on randomised environments and having never seen the target task, can significantly outperform \textit{specialist} agents trained directly on the target environment. Moreover, in analysing how the student's regret varies across environments and co-players, we find that a joint curriculum, as produced by \method{}, is indeed required for finding the highest regret levels, as necessitated by UED. In summary, we make the following core contributions: (i) we provide the first formalism for multi-agent learning in underspecified environments, (ii) we introduce \method{}, a novel approach to jointly learn autocurricula over environment/co-player pairs, implicitly modelling their dependence, (iii) we prove \method{} inherits the theoretical property from the single-agent setting of implementing a minimax-regret policy at equilibrium, which corresponds to a Bayesian Nash or Nash equilibrium in certain settings, and (iv) by rigorously analysing the curriculum induced by \method{} and evaluating \method{} agents against strong baselines, we empirically demonstrate the importance of the joint curriculum over the environments and co-players. \begin{figure}[tp!] 
\begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_start_1.png} \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_start_3.png}\\ \vspace{1mm} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_start_2.png} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_start_3.png} \caption{Start of training} \label{subfig:curr_lt_start} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_mid_1.png} \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_mid_2.png}\\ \vspace{1mm} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_mid_2.png} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_mid_3.png} \caption{Middle of training} \label{subfig:curr_lt_mid} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_end_1.png} \includegraphics[width=1.65cm]{figures/levels_train_LT/lt_end_2.png}\\ \vspace{1mm} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_end_1.png} \includegraphics[width=1.65cm]{figures/levels_train_MCR/mcr_end_2.png} \caption{End of training} \label{subfig:curr_lt_end} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=1.65cm]{figures/lasertag-test/Lasertag-LargeCorridor-N2-v0.png} \includegraphics[width=1.65cm]{figures/lasertag-test/Lasertag-SixteenRooms-N2-v0.png}\\ \vspace{1mm} \includegraphics[width=1.65cm]{figures/carracing-f1/MultiCarRacing-F1-Germany-v0.png} \includegraphics[width=1.65cm]{figures/carracing-f1/MultiCarRacing-F1-USA-v0.png} \caption{Zero-shot evaluation} \label{subfig:lt_eval} \end{subfigure} \vspace{-4.6mm} \caption{\small{\textbf{Emergent complexity of autocurricula induced by \method{}}. Examples of partially observable environments provided to the \method{} student agent at the (a) start, (b) middle, and (c) end of training. Levels become more complex over time. LaserTag levels (top row) increase in wall density and active engagement between the \textcolor{lt_red}{\textbf{student}} and \textcolor{lt_blue}{\textbf{opponent}}. MultiCarRacing tracks (bottom row) become increasingly more challenging with many sharp turns. (d) Example held-out human-designed LaserTag levels and Formula 1 benchmark tracks <|cite_start|> (Reference: Replay-Guided Adversarial Environment Design: Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. 
Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR$^{\perp}$, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR$^{\perp}$ improves the performance of PAIRED, from which it inherited its theoretical framework.) <|cite_end|> used for OOD evaluation. For the full list of evaluation environments see \cref{sec:env_details}. }} \label{fig:emergent} \vspace{-6mm} \end{figure} \vspace{-1mm} Related Work \label{sec:related_work} \vspace{-1mm} \emph{Unsupervised Environment Design}~\citep[UED,][]{paired} is a family of methods that provide an agent with a sequence of environments for training robust policies. The simplest UED approach is \emph{Domain Randomisation}~\citep[DR,][]{evolutionary_dr, cad2rl}, which has demonstrated strong empirical performance in domains such as robotics <|cite_start|> (Reference: Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World: Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to $1.5$cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.) <|cite_end|> <|cite_start|> (Reference: Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task: End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches. However, end-to-end methods tend to either be slow to train, exhibit little or no generalisability, or lack the ability to accomplish long-horizon or multi-stage tasks. In this paper, we show how two simple techniques can lead to end-to-end (image to velocity) execution of a multi-stage task, which is analogous to a simple tidying routine, without having seen a single real image. This involves locating, reaching for, and grasping a cube, then locating a basket and dropping the cube inside. To achieve this, robot trajectories are computed in a simulator, to collect a series of control velocities which accomplish the task. Then, a CNN is trained to map observed images to velocities, using domain randomisation to enable generalisation to real world images.
Results show that we are able to successfully accomplish the task in the real world with the ability to generalise to novel environments, including those with dynamic lighting conditions, distractor objects, and moving objects, including the basket itself. We believe our approach to be simple, highly scalable, and capable of learning long-horizon tasks that have until now not been shown with the state-of-the-art in end-to-end robot control.) <|cite_end|> and magnetic control of tokamak plasmas <|cite_start|> (Reference: Magnetic control of tokamak plasmas through deep reinforcement learning: ) <|cite_end|>. PAIRED <|cite_start|> (Reference: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design: A wide range of reinforcement learning (RL) problems - including robustness, transfer learning, unsupervised RL, and emergent complexity - require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error prone, and takes a significant amount of developer time and effort. We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: domain randomization cannot generate structure or adapt the difficulty of the environment to the agent's learning progress, and minimax adversarial training leads to worst-case environments that are often unsolvable. To generate structured, solvable environments for our protagonist agent, we introduce a second, antagonist agent that is allied with the environment-generating adversary. The adversary is motivated to generate environments which maximize regret, defined as the difference between the protagonist and antagonist agent's return. We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED). Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.) <|cite_end|> <|cite_start|> (Reference: Environment Generation for Zero-Shot Compositional Reinforcement Learning: Many real-world problems are compositional - solving them requires completing interdependent sub-tasks, either in series or in parallel, that can be represented as a dependency graph. Deep reinforcement learning (RL) agents often struggle to learn such complex tasks due to the long time horizons and sparse rewards. To address this problem, we present Compositional Design of Environments (CoDE), which trains a Generator agent to automatically build a series of compositional tasks tailored to the RL agent's current skill level. This automatic curriculum not only enables the agent to learn more complex tasks than it could have otherwise, but also selects tasks where the agent's performance is weak, enhancing its robustness and ability to generalize zero-shot to unseen tasks at test-time. We analyze why current environment generation techniques are insufficient for the problem of generating compositional tasks, and propose a new algorithm that addresses these issues. Our results assess learning and generalization across multiple compositional tasks, including the real-world problem of learning to navigate and interact with web pages. 
We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments. We contribute two new benchmark frameworks for generating compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE yields 4x higher success rate than the strongest baseline, and demonstrates strong performance of real websites learned on 3500 primitive tasks.) <|cite_end|> trains an environment generator that maximises the student's regret, approximated as the difference in return between the student and an antagonist agent. \emph{Prioritized Level Replay} \citep[PLR,][]{jiang2021robustplr,plr} curates environment instances (i.e., levels) for training by performing a random search over domain-randomised levels for those with high learning potential, e.g., as measured by estimated regret. \emph{ACCEL} <|cite_start|> (Reference: Evolving Curricula with Regret-Based Environment Design: It remains a significant challenge to train generally capable agents with reinforcement learning (RL). A promising avenue for improving the robustness of RL agents is through the use of curricula. One such class of methods frames environment design as a game between a student and a teacher, using regret-based objectives to produce environment instantiations (or levels) at the frontier of the student agent's capabilities. These methods benefit from their generality, with theoretical guarantees at equilibrium, yet they often struggle to find effective levels in challenging design spaces. By contrast, evolutionary approaches seek to incrementally alter environment complexity, resulting in potentially open-ended learning, but often rely on domain-specific heuristics and vast amounts of computational resources. In this paper we propose to harness the power of evolution in a principled, regret-based curriculum. Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex. ACCEL maintains the theoretical benefits of prior regret-based methods, while providing significant empirical gains in a diverse set of environments. An interactive version of the paper is available at accelagent.github.io.) <|cite_end|> is a replay-guided UED approach that extends PLR by making edits to high-regret environments. Several methods generate curricula by adapting the environment parameters in response to the agent's performance <|cite_start|> (Reference: Teacher algorithms for curriculum learning of Deep RL in continuously parameterized environments: We consider the problem of how a teacher algorithm can enable an unknown Deep Reinforcement Learning (DRL) student to become good at a skill over a wide range of diverse environments. To do so, we study how a teacher algorithm can learn to generate a learning curriculum, whereby it sequentially samples parameters controlling a stochastic procedural generation of environments. Because it does not initially know the capacities of its student, a key challenge for the teacher is to discover which environments are easy, difficult or unlearnable, and in what order to propose them to maximize the efficiency of learning over the learnable ones. To achieve this, this problem is transformed into a surrogate continuous bandit problem where the teacher samples environments in order to maximize absolute learning progress of its student.
We present a new algorithm modeling absolute learning progress with Gaussian mixture models (ALP-GMM). We also adapt existing algorithms and provide a complete study in the context of DRL. Using parameterized variants of the BipedalWalker environment, we study their efficiency to personalize a learning curriculum for different learners (embodiments), their robustness to the ratio of learnable/unlearnable environments, and their scalability to non-linear and high-dimensional parameter spaces. Videos and code are available at https://github.com/flowersteam/teachDeepRL.) <|cite_end|> <|cite_start|> (Reference: Teacher-Student Curriculum Learning: We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student's performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.) <|cite_end|> <|cite_start|> (Reference: Self-Paced Contextual Reinforcement Learning: Generalization and adaptation of learned skills to novel situations is a core requirement for intelligent autonomous robots. Although contextual reinforcement learning provides a principled framework for learning and generalization of behaviors across related tasks, it generally relies on uninformed sampling of environments from an unknown, uncontrolled context distribution, thus missing the benefits of structured, sequential learning. We introduce a novel relative entropy reinforcement learning algorithm that gives the agent the freedom to control the intermediate task distribution, allowing for its gradual progression towards the target context distribution. Empirical evaluation shows that the proposed curriculum learning scheme drastically improves sample efficiency and enables learning in scenarios with both broad and sharp target context distributions in which classical approaches perform sub-optimally.) <|cite_end|> <|cite_start|> (Reference: Self-Paced Context Evaluation for Contextual Reinforcement Learning: Reinforcement learning (RL) has made a lot of advances for solving a single problem in a given environment; but learning policies that generalize to unseen variations of a problem remains challenging. To improve sample efficiency for learning on such instances of a problem domain, we present Self-Paced Context Evaluation (SPaCE). Based on self-paced learning, SPaCE automatically generates task curricula online with little computational overhead. To this end, SPaCE leverages information contained in state values during training to accelerate and improve training performance as well as generalization capabilities to new instances from the same problem domain.
Nevertheless, SPaCE is independent of the problem domain at hand and can be applied on top of any RL agent with state-value function approximation. We demonstrate SPaCE's ability to speed up learning of different value-based RL agents on two environments, showing better generalization capabilities and up to 10x faster learning compared to naive approaches such as round robin or SPDRL, as the closest state-of-the-art approach.) <|cite_end|>. This adaptation is largely heuristic-driven, without the robustness guarantees shared by minimax-regret UED methods. Notably, all these methods focus on single-agent RL, while \method{} is designed for the two-player multi-agent setting. Many prior works study curricula over opponents in two-player zero-sum settings. The most naive approach, self-play (SP), consists of pitting the agent against a copy of itself. Combined with search, SP has led to superhuman performance in board games such as Backgammon <|cite_start|> (Reference: Temporal difference learning and TD-gammon: Ever since the days of Shannon's proposal for a chess-playing algorithm [12] and Samuel's checkers-learning program [10] the domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. Such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. At the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome.) <|cite_end|>, Chess and Go <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|>. <|cite_start|> (Reference: Regret minimization in games with incomplete information: Extensive games are a powerful model of multiagent decision-making scenarios with incomplete information. Finding a Nash equilibrium for very large instances of these games has received a great deal of recent attention. In this paper, we describe a new technique for solving large games based on regret minimization. In particular, we introduce the notion of counterfactual regret, which exploits the degree of incomplete information in an extensive game. We show how minimizing counterfactual regret minimizes overall regret, and therefore in self-play can be used to compute a Nash equilibrium. We demonstrate this technique in the domain of poker, showing we can solve abstractions of limit Texas Hold'em with as many as $10^{12}$ states, two orders of magnitude larger than previous methods.) <|cite_end|> use self-play with regret minimisation to achieve a Nash equilibrium, an approach that led to superhuman performance in Poker <|cite_start|> (Reference: Superhuman AI for Heads-up No-limit Poker: Libratus Beats Top Professionals: Libratus versus humans Pitting artificial intelligence (AI) against top human players demonstrates just how far AI has come. Brown and Sandholm built a poker-playing AI called Libratus that decisively beat four leading human professionals in the two-player variant of poker called heads-up no-limit Texas hold'em (HUNL).
Over nearly 3 weeks, Libratus played 120,000 hands of HUNL against the human professionals, using a three-pronged approach that included precomputing an overall strategy, adapting the strategy to actual gameplay, and learning from its opponent. Science, this issue p. 418 An artificial intelligence program called Libratus played 120,000 hands of a two-player variant of poker and beat four leading human professionals. No-limit Texas hold’em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold’em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy.) <|cite_end|> <|cite_start|> (Reference: Superhuman AI for multiplayer poker: AI now masters six-player poker Computer programs have shown superiority over humans in two-player games such as chess, Go, and heads-up, no-limit Texas hold'em poker. However, poker games usually include six players—a much trickier challenge for artificial intelligence than the two-player variant. Brown and Sandholm developed a program, dubbed Pluribus, that learned how to play six-player no-limit Texas hold'em by playing against five copies of itself (see the Perspective by Blair and Saffidine). When pitted against five elite professional poker players, or with five copies of Pluribus playing against one professional, the computer performed significantly better than humans over the course of 10,000 hands of poker. Science, this issue p. 885; see also p. 864 An AI dubbed Pluribus performs significantly better than human professionals in six-player no-limit Texas hold’em poker. In recent years there have been great strides in artificial intelligence (AI), with games often serving as challenge problems, benchmarks, and milestones for progress. Poker has served for decades as such a challenge problem. Past successes in such benchmarks, including poker, have been limited to two-player games. However, poker in particular is traditionally played with more than two players. Multiplayer games present fundamental additional issues beyond those in two-player games, and multiplayer poker is a recognized AI milestone. In this paper we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold’em poker, the most popular form of poker played by humans.) <|cite_end|>. \textit{Fictitious self-play} (FSP) learns a best-response to the uniform mixture of all previous versions of the agent <|cite_start|> (Reference: Generalised weakened fictitious play: ) <|cite_end|> <|cite_start|> (Reference: Fictitious self-play in extensive-form games: Fictitious play is a popular game-theoretic model of learning in games. However, it has received little attention in practical applications to large problems. This paper introduces two variants of fictitious play that are implemented in behavioural strategies of an extensive-form game. 
The first variant is a full-width process that is realization equivalent to its normal-form counterpart and therefore inherits its convergence guarantees. However, its computational requirements are linear in time and space rather than exponential. The second variant, Fictitious Self-Play, is a machine learning framework that implements fictitious play in a sample-based fashion. Experiments in imperfect-information poker games compare our approaches and demonstrate their convergence to approximate Nash equilibria.) <|cite_end|>. \textit{Prioritised fictitious self-play} \citep[PFSP,][]{alphastar} trains agents against a non-uniform mixture of policies based on the probability of winning against each policy. PFSP is a practical variant of \textit{Policy-Space Response Oracles}~\citep[PSRO,][]{lanctot17unified}, a general population learning framework, whereby new policies are trained as best responses to a mixture of previous policies. \method{} is related to PSRO but adapted for UPOSGs. In \method{}, the population meta-strategy is based on the student's regret when playing against policies on environments observed during training. Unlike our work, these prior autocurricula methods for competitive multi-agent environments do not directly consider variations of the environment itself. Several prior works have applied DR in multi-agent domains. Randomly modifying the environment has proven critical for the emergence of complex behaviours in Hide-and-Seek <|cite_start|> (Reference: Emergent Tool Use From Multi-Agent Autocurricula: Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests.) <|cite_end|>, Capture the Flag <|cite_start|> (Reference: {Human-level performance in 3D multiplayer games with population-based reinforcement learning: Artificial teamwork Artificially intelligent agents are getting better and better at two-player games, but most real-world endeavors require teamwork. Jaderberg et al. designed a computer program that excels at playing the video game Quake III Arena in Capture the Flag mode, where two multiplayer teams compete in capturing the flags of the opposing team. The agents were trained by playing thousands of games, gradually learning successful strategies not unlike those favored by their human counterparts. Computer agents competed successfully against humans even when their reaction times were slowed to match those of humans. Science, this issue p. 
859 Teams of artificial agents compete successfully against humans in the video game Quake III Arena in Capture the Flag mode. Reinforcement learning (RL) has shown great success in increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents. We used a tournament-style evaluation to demonstrate that an agent can achieve human-level performance in a three-dimensional multiplayer first-person video game, Quake III Arena in Capture the Flag mode, using only pixels and game points scored as input. We used a two-tier optimization process in which a population of independent RL agents are trained concurrently from thousands of parallel matches on randomly generated environments. Each agent learns its own internal reward signal and rich representation of the world. These results indicate the great potential of multiagent reinforcement learning for artificial intelligence research.) <|cite_end|>, and StarCraft II Unit Micromanagement <|cite_start|> (Reference: SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning: The availability of challenging benchmarks has played a key role in the recent progress of machine learning. In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for centralised training with decentralised execution. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance. In this work, we conduct new analysis demonstrating that SMAC lacks the stochasticity and partial observability to require complex *closed-loop* policies. In particular, we show that an *open-loop* policy conditioned only on the timestep can achieve non-trivial win rates for many SMAC scenarios. To address this limitation, we introduce SMACv2, a new version of the benchmark where scenarios are procedurally generated and require agents to generalise to previously unseen settings (from the same distribution) during evaluation. We also introduce the extended partial observability challenge (EPO), which augments SMACv2 to ensure meaningful partial observability. We show that these changes ensure the benchmark requires the use of *closed-loop* policies. We evaluate state-of-the-art algorithms on SMACv2 and show that it presents significant challenges not present in the original benchmark. Our analysis illustrates that SMACv2 addresses the discovered deficiencies of SMAC and can help benchmark the next generation of MARL methods. Videos of training are available at https://sites.google.com/view/smacv2.) <|cite_end|>. In XLand <|cite_start|> (Reference: Open-Ended Learning Leads to Generally Capable Agents: In this work we create agents that can perform well beyond a single, individual task, that exhibit much wider generalisation of behaviour to a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. 
We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. We show that through constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks. Examples of this zero-shot generalisation include good performance on Hide and Seek, Capture the Flag, and Tag. Through analysis and hand-authored probe tasks we characterise the behaviour of our agent, and find interesting emergent heuristic behaviours such as trial-and-error experimentation, simple tool use, option switching, and cooperation. Finally, we demonstrate that the general capabilities of this agent could unlock larger scale transfer of behaviour through cheap finetuning.) <|cite_end|>, a curriculum is provided over both environments and tasks to create general learners. This work differs from ours in multiple aspects. <|cite_start|> (Reference: Open-Ended Learning Leads to Generally Capable Agents: In this work we create agents that can perform well beyond a single, individual task, that exhibit much wider generalisation of behaviour to a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. We show that through constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks. Examples of this zero-shot generalisation include good performance on Hide and Seek, Capture the Flag, and Tag. Through analysis and hand-authored probe tasks we characterise the behaviour of our agent, and find interesting emergent heuristic behaviours such as trial-and-error experimentation, simple tool use, option switching, and cooperation. Finally, we demonstrate that the general capabilities of this agent could unlock larger scale transfer of behaviour through cheap finetuning.) <|cite_end|> uses handcrafted heuristics and rejection sampling for selecting environments for training and evaluating agents, while \method{} automatically selects environments based on regret rather than hand-coded heuristics. 
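To make this regret-driven selection concrete, the sketch below shows how a replay-guided curriculum can curate joint (environment, co-player) pairs by estimated regret, in the spirit of \method{}; the buffer structure, the positive-value-loss regret proxy, and the \texttt{student.rollout} interface are illustrative assumptions rather than the exact implementation evaluated in this paper.
\begin{verbatim}
import random
from dataclasses import dataclass, field

@dataclass
class PairBuffer:
    """Replay buffer over joint (environment, co-player) pairs,
    scored by estimated regret (higher = more learning potential)."""
    capacity: int = 128
    pairs: list = field(default_factory=list)   # (env_params, co_player_id)
    scores: list = field(default_factory=list)  # estimated regret per pair

    def add(self, pair, regret):
        if len(self.pairs) < self.capacity:
            self.pairs.append(pair)
            self.scores.append(regret)
            return
        # Evict the lowest-regret pair if the new pair is more promising.
        i = min(range(len(self.scores)), key=self.scores.__getitem__)
        if regret > self.scores[i]:
            self.pairs[i], self.scores[i] = pair, regret

    def sample(self):
        # Rank-based prioritisation, as in PLR: higher regret, higher weight.
        order = sorted(range(len(self.scores)),
                       key=self.scores.__getitem__, reverse=True)
        rank = {idx: r + 1 for r, idx in enumerate(order)}
        weights = [1.0 / rank[i] for i in range(len(self.pairs))]
        return random.choices(self.pairs, weights=weights, k=1)[0]

def positive_value_loss(returns, values):
    # A common regret proxy: mean clipped advantage over a rollout.
    total = sum(max(r - v, 0.0) for r, v in zip(returns, values))
    return total / max(len(returns), 1)

def curriculum_step(buffer, student, co_players, sample_env, replay_prob=0.5):
    if buffer.pairs and random.random() < replay_prob:
        env_params, co_id = buffer.sample()        # curate a high-regret pair
    else:
        env_params = sample_env()                  # explore a randomised level...
        co_id = random.randrange(len(co_players))  # ...with a random co-player
    returns, values = student.rollout(env_params, co_players[co_id])
    buffer.add((env_params, co_id), positive_value_loss(returns, values))
\end{verbatim}
Crucially, a single buffer scores pairs jointly; maintaining two independent curricula over environments and co-players would miss exactly the high-regret combinations that our analysis shows are needed.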
Furthermore, unlike the autocurricula used in XLand, \method{} does not rely on population-based training, a computationally expensive procedure for tuning autocurriculum hyperparameters. \vspace{-2mm} <|paper_end|>
[ "<|reference_start|> Neural Auto-Curricula in Two-Player Zero-Sum Games: <|reference_end|>", "<|reference_start|> A general reinforcement learning algorithm that\nmasters chess, shogi, and Go through self-play: One program to rule them all Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system. Science, this issue p. 1140; see also pp. 1087 and 1118 AlphaZero teaches itself to play three different board games and beats state-of-the-art programs in each. The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go. <|reference_end|>", "<|reference_start|> Emergent Tool Use From Multi-Agent Autocurricula: Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests. <|reference_end|>", "<|reference_start|> {Human-level performance in 3D multiplayer games with population-based reinforcement learning: Artificial teamwork Artificially intelligent agents are getting better and better at two-player games, but most real-world endeavors require teamwork. Jaderberg et al. 
designed a computer program that excels at playing the video game Quake III Arena in Capture the Flag mode, where two multiplayer teams compete in capturing the flags of the opposing team. The agents were trained by playing thousands of games, gradually learning successful strategies not unlike those favored by their human counterparts. Computer agents competed successfully against humans even when their reaction times were slowed to match those of humans. Science, this issue p. 859 Teams of artificial agents compete successfully against humans in the video game Quake III Arena in Capture the Flag mode. Reinforcement learning (RL) has shown great success in increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents. We used a tournament-style evaluation to demonstrate that an agent can achieve human-level performance in a three-dimensional multiplayer first-person video game, Quake III Arena in Capture the Flag mode, using only pixels and game points scored as input. We used a two-tier optimization process in which a population of independent RL agents are trained concurrently from thousands of parallel matches on randomly generated environments. Each agent learns its own internal reward signal and rich representation of the world. These results indicate the great potential of multiagent reinforcement learning for artificial intelligence research. <|reference_end|>" ]
[ 13, 16, 45, 46 ]
{"<|multi_cite_1_1|>": "ss-679381", "<|multi_cite_1_2|>": "arxiv-239288", "<|multi_cite_1_3|>": "ss-814844", "<|multi_cite_2_1|>": "ss-805362", "<|multi_cite_2_2|>": "arxiv-235062", "<|multi_cite_3_1|>": "arxiv-193696", "<|multi_cite_3_2|>": "arxiv-321317", "<|cite_4|>": "arxiv-357444", "<|multi_cite_5_1|>": "arxiv-188610", "<|multi_cite_6_1|>": "arxiv-193696", "<|multi_cite_6_2|>": "arxiv-372533", "<|multi_cite_6_3|>": "arxiv-224043", "<|multi_cite_6_4|>": "arxiv-136934", "<|multi_cite_6_5|>": "ss-2127624", "<|cite_7|>": "ss-809792", "<|cite_8|>": "ss-679381", "<|multi_cite_9_1|>": "ss-809792", "<|multi_cite_9_2|>": "ss-998920", "<|multi_cite_10_2|>": "ss-1259971", "<|multi_cite_11_1|>": "arxiv-186727", "<|multi_cite_11_2|>": "arxiv-254461", "<|multi_cite_11_3|>": "arxiv-371800", "<|multi_cite_11_4|>": "arxiv-402891", "<|multi_cite_12_1|>": "arxiv-321317", "<|multi_cite_12_2|>": "arxiv-396082", "<|cite_13|>": "arxiv-138993", "<|cite_14|>": "arxiv-322306", "<|cite_15|>": "arxiv-371800", "<|multi_cite_16_1|>": "arxiv-119557", "<|multi_cite_16_2|>": "arxiv-128738", "<|cite_17|>": "ss-737262", "<|multi_cite_18_1|>": "arxiv-307602", "<|multi_cite_18_2|>": "arxiv-394062", "<|cite_19|>": "arxiv-402891", "<|multi_cite_20_1|>": "arxiv-229108", "<|multi_cite_20_2|>": "arxiv-128209", "<|multi_cite_20_3|>": "arxiv-227521", "<|multi_cite_20_4|>": "arxiv-347070", "<|cite_21|>": "ss-998920", "<|cite_22|>": "ss-805362", "<|cite_29|>": "ss-1259970", "<|multi_cite_23_1|>": "ss-1179645", "<|multi_cite_23_2|>": "ss-1516508", "<|multi_cite_24_2|>": "ss-771163", "<|multi_cite_24_3|>": "ss-1259971", "<|cite_25|>": "arxiv-224043", "<|cite_26|>": "ss-782939", "<|cite_27|>": "arxiv-469550", "<|cite_28|>": "arxiv-357444", "<|cite_30|>": "arxiv-357444"}
2312.10934
<|paper_start|> Title: APIDocBooster: An Extract-Then-Abstract Framework Leveraging Large Language Models for Augmenting API Documentation Abstract: APIDocBooster: An Extract-Then-Abstract Framework Leveraging Large Language Models for Augmenting API Documentation: API documentation is often the most trusted resource for programming. Many approaches have been proposed to augment API documentation by summarizing complementary information from external resources such as Stack Overflow. Existing extractive-based summarization approaches excel in producing faithful summaries that accurately represent the source content without input length restrictions. Nevertheless, they suffer from inherent readability limitations. On the other hand, our empirical study on the abstractive-based summarization method, i.e., GPT-4, reveals that GPT-4 can generate coherent and concise summaries but presents limitations in terms of informativeness and faithfulness. We introduce APIDocBooster, an extract-then-abstract framework that seamlessly fuses the advantages of both extractive (i.e., enabling faithful summaries without length limitation) and abstractive summarization (i.e., producing coherent and concise summaries). APIDocBooster consists of two stages: (1) \textbf{C}ontext-aware \textbf{S}entence \textbf{S}ection \textbf{C}lassification (CSSC) and (2) \textbf{UP}date \textbf{SUM}marization (UPSUM). CSSC classifies API-relevant information collected from multiple sources into API documentation sections. UPSUM first generates extractive summaries distinct from the original API documentation and then generates abstractive summaries guided by extractive summaries through in-context learning. To enable automatic evaluation of APIDocBooster, we construct the first dataset for API document augmentation. Our automatic evaluation results reveal that each stage in APIDocBooster outperforms its baselines by a large margin. Our human evaluation also demonstrates the superiority of APIDocBooster over GPT-4 and shows that it improves informativeness, relevance, and faithfulness by 13.89\%, 15.15\%, and 30.56\%, respectively. Introduction \label{sec:intro} The application programming interface (API) is one of the most vital components of modern application development. Software developers typically rely on API reference documentation (API documentation in short) to learn APIs <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. 
In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> <|cite_start|> (Reference: An Empirical Study on API Usages: API libraries provide thousands of APIs, and are essential in daily programming tasks. To understand their usages, it has long been a hot research topic to mine specifications that formally define legal usages for APIs. Furthermore, researchers are working on many other research topics on APIs. Although the research on APIs is intensively studied, many fundamental questions on APIs are still open. For example, the answers to open questions, such as which format can naturally define API usages and in which case, are still largely unknown. We notice that many such open questions are not concerned with concrete usages of specific APIs, but usages that describe how to use different types of APIs. To explore these questions, in this paper, we conduct an empirical study on API usages, with an emphasis on how different types of APIs are used. Our empirical results lead to nine findings on API usages. For example, we find that single-type usages are mostly strict orders, but multi-type usages are more complicated since they include both strict orders and partial orders. Based on these findings, for the research on APIs, we provide our suggestions on the four key aspects such as the challenges, the importance of different API elements, usage patterns, and pitfalls in designing evaluations. Furthermore, we interpret our findings, and present our insights on data sources, extraction techniques, mining techniques, and formats of specifications for the research of mining specifications.) <|cite_end|> <|cite_start|> (Reference: Understanding How and Why Developers Seek and Analyze API-related Opinions: With the advent and proliferation of online developer forums as informal documentation, developers often share their opinions about the APIs they use. Thus, opinions of others often shape the developer's perception and decisions related to software development. For example, the choice of an API or how to reuse the functionality the API offers are, to a considerable degree, conditioned upon what other developers think about the API. While many developers refer to and rely on such opinion-rich information about APIs, we found little research that investigates the use and benefits of public opinions. To understand how developers seek and evaluate API opinions, we conducted two surveys involving a total of 178 software developers. We analyzed the data in two dimensions, each corresponding to specific needs related to API reviews: (1) Needs for seeking API reviews, and (2) Needs for automated tool support to assess the reviews. We observed that developers seek API reviews and often have to summarize those for diverse development needs (e.g., API suitability). Developers also make conscious efforts to judge the trustworthiness of the provided opinions and believe that automated tool support for API reviews analysis can assist in diverse development scenarios, including, for example, saving time in API selection as well as making informed decisions on a particular API features.) <|cite_end|>. 
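Before turning to the problem setting, a minimal sketch clarifies the extract-then-abstract idea at the heart of APIDocBooster: an extractive pass first selects sentences that add information beyond the official documentation, and an abstractive pass then rewrites them with an LLM, using the extractive summary as in-context guidance. The word-overlap novelty scorer, the \texttt{call\_llm} stand-in, and the prompt wording below are illustrative assumptions, not the pipeline evaluated in this paper.
\begin{verbatim}
from textwrap import shorten

def extract_then_abstract(api_doc, candidate_sents, call_llm, top_k=5):
    """Two-stage update summarization sketch: extract, then abstract.
    call_llm(prompt) -> str is a stand-in for any LLM completion call."""
    # Stage 1 (extractive): rank candidate sentences by how much new
    # vocabulary they add relative to the official documentation.
    doc_words = set(api_doc.lower().split())

    def novelty(sent):
        words = set(sent.lower().split())
        return 1.0 - len(words & doc_words) / max(len(words), 1)

    extractive = sorted(candidate_sents, key=novelty, reverse=True)[:top_k]

    # Stage 2 (abstractive): rewrite the extractive summary into one
    # coherent paragraph, keeping the extracted sentences as grounding.
    prompt = ("The sentences below complement an API document. Rewrite "
              "them as one concise, coherent paragraph without adding "
              "new facts:\n- "
              + "\n- ".join(shorten(s, 200) for s in extractive))
    return call_llm(prompt)
\end{verbatim}
Grounding the abstractive stage in the extractive output is what trades a little fluency for faithfulness, the weakness our empirical study exposes when GPT-4 summarizes alone.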
API documentation is a set of documents indexed by the API name, where each document provides information about a specific API <|cite_start|> (Reference: {Patterns of knowledge in API reference documentation: Reading reference documentation is an important part of programming with application programming interfaces (APIs). Reference documentation complements the API by providing information not obvious from the API syntax. To improve the quality of reference documentation and the efficiency with which the relevant information it contains can be accessed, we must first understand its content. We report on a study of the nature and organization of knowledge contained in the reference documentation of the hundreds of APIs provided as a part of two major technology platforms: Java SDK 6 and .NET 4.0. Our study involved the development of a taxonomy of knowledge types based on grounded methods and independent empirical validation. Seventeen trained coders used the taxonomy to rate a total of 5,574 randomly sampled documentation units to assess the knowledge they contain. Our results provide a comprehensive perspective on the patterns of knowledge in API documentation: observations about the types of knowledge it contains and how this knowledge is distributed throughout the documentation. The taxonomy and patterns of knowledge we present in this paper can be used to help practitioners evaluate the content of their API documentation, better organize their documentation, and limit the amount of low-value content. They also provide a vocabulary that can help structure and facilitate discussions about the content of APIs.) <|cite_end|>. According to the 2023 \so Developer Survey, technical documentation is the most trusted resource for programming. However, creating and maintaining high-quality and readable API documentation still requires significant effort <|cite_start|> (Reference: Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks. We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. Our findings can help researchers in designing the next generation of documentation recommender systems.) <|cite_end|>. Existing API documents are often incomplete and not always equally readable <|cite_start|> (Reference: {How API Documentation Fails: Formal documentation can be a crucial resource for learning to how to use an API. However, producing high-quality documentation can be nontrivial. Researchers investigated how 10 common documentation problems manifested themselves in practice. The results are based on two surveys of a total of 323 professional software developers and analysis of 179 API documentation units. 
The three severest problems were ambiguity, incompleteness, and incorrectness of content. The respondents often mentioned six of the 10 problems as "blockers" that forced them to use another API.) <|cite_end|> <|cite_start|> (Reference: A field study of API learning obstacles: ) <|cite_end|> <|cite_start|> (Reference: {Software Documentation Issues Unveiled: (Good) Software documentation provides developers and users with a description of what a software system does, how it operates, and how it should be used. For example, technical documentation (e.g., an API reference guide) aids developers during evolution/maintenance activities, while a user manual explains how users are to interact with a system. Despite its intrinsic value, the creation and the maintenance of documentation is often neglected, negatively impacting its quality and usefulness, ultimately leading to a generally unfavourable take on documentation. Previous studies investigating documentation issues have been based on surveying developers, which naturally leads to a somewhat biased view of problems affecting documentation. We present a large scale empirical study, where we mined, analyzed, and categorized 878 documentation-related artifacts stemming from four different sources, namely mailing lists, Stack Overflow discussions, issue repositories, and pull requests. The result is a detailed taxonomy of documentation issues from which we infer a series of actionable proposals both for researchers and practitioners.) <|cite_end|> <|cite_start|> (Reference: Beyond Accuracy: Assessing Software Documentation Quality: Good software documentation encourages good software engineering, but the meaning of "good" documentation is vaguely defined in the software engineering literature. To clarify this ambiguity, we draw on work from the data and information quality community to propose a framework that decomposes documentation quality into ten dimensions of structure, content, and style. To demonstrate its application, we recruited technical editors to apply the framework when evaluating examples from several genres of software documentation. We summarise their assessments -- for example, reference documentation and README files excel in quality whereas blog articles have more problems -- and we describe our vision for reasoning about software documentation quality and for the expansion and potential of a unified quality framework.) <|cite_end|> <|cite_start|> (Reference: {Patterns of knowledge in API reference documentation: Reading reference documentation is an important part of programming with application programming interfaces (APIs). Reference documentation complements the API by providing information not obvious from the API syntax. To improve the quality of reference documentation and the efficiency with which the relevant information it contains can be accessed, we must first understand its content. We report on a study of the nature and organization of knowledge contained in the reference documentation of the hundreds of APIs provided as a part of two major technology platforms: Java SDK 6 and .NET 4.0. Our study involved the development of a taxonomy of knowledge types based on grounded methods and independent empirical validation. Seventeen trained coders used the taxonomy to rate a total of 5,574 randomly sampled documentation units to assess the knowledge they contain.
Our results provide a comprehensive perspective on the patterns of knowledge in API documentation: observations about the types of knowledge it contains and how this knowledge is distributed throughout the documentation. The taxonomy and patterns of knowledge we present in this paper can be used to help practitioners evaluate the content of their API documentation, better organize their documentation, and limit the amount of low-value content. They also provide a vocabulary that can help structure and facilitate discussions about the content of APIs.) <|cite_end|>. A recent survey of professional developers shows that 60\% of the participants have suffered from inadequate API documentation in the last three months <|cite_start|> (Reference: Automatic detection of five api documentation smells: Practitioners’ perspectives: The learning and usage of an API is supported by official documentation. Like source code, API documentation is itself a software product. Several research results show that bad design in API documentation can make the reuse of API features difficult. Indeed, similar to code smells or code anti-patterns, poorly designed API documentation can also exhibit ‘smells’. Such documentation smells can be described as bad documentation styles that do not necessarily produce an incorrect documentation but nevertheless make the documentation difficult to properly understand and to use. Recent research on API documentation has focused on finding content inaccuracies in API documentation and to complement API documentation with external resources (e.g., crowd-shared code examples). We are aware of no research that focused on the automatic detection of API documentation smells. This paper makes two contributions. First, we produce a catalog of five API documentation smells by consulting literature on API documentation presentation problems. We create a benchmark dataset of 1,000 API documentation units by exhaustively and manually validating the presence of the five smells in Java official API reference and instruction documentation. Second, we conduct a survey of 21 professional software developers to validate the catalog. The developers agreed that they frequently encounter all five smells in API official documentation and 95.2% of them reported that the presence of the documentation smells negatively affects their productivity. The participants wished for tool support to automatically detect and fix the smells in API official documentation. We develop a suite of rule-based, deep and shallow machine learning classifiers to automatically detect the smells. The best performing classifier BERT, a deep learning model, achieves F1-scores of 0.75 - 0.97.) <|cite_end|>. Motivated by this, researchers have proposed many solutions to augment API documentation <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. 
We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> <|cite_start|> (Reference: Automatic summarization of API reviews: With the proliferation of online developer forums as informal documentation, developers often share their opinions about the APIs they use. However, given the plethora of opinions available for an API in various online developer forums, it can be challenging for a developer to make informed decisions about the APIs. While automatic summarization of opinions have been explored for other domains (e.g., cameras, cars), we found little research that investigates the benefits of summaries of public API reviews. In this paper, we present two algorithms (statistical and aspect-based) to summarize opinions about APIs. To investigate the usefulness of the techniques, we developed, Opiner, an online opinion summarization engine that presents summaries of opinions using both our proposed techniques and existing six off-the-shelf techniques. We investigated the usefulness of Opiner using two case studies, both involving professional software engineers. We found that developers were interested to use our proposed summaries much more frequently than other summaries (daily vs once a year) and that while combined with Stack Overflow, Opiner helped developers to make the right decision with more accuracy and confidence and in less time.) <|cite_end|> <|cite_start|> (Reference: Automated Documentation of Android Apps: Developers do not always have the knowledge needed to understand source code and must refer to different resources (e.g., teammates, documentation, the web). This non-trivial process, called program comprehension, is very time-consuming. 
While many approaches support the comprehension of a given code at hand, they are mostly focused on defining extractive summaries from the code (i.e., on selecting from a given piece of code the most important statements/comments to comprehend it). However, if the information needed to comprehend the code is not there, their usefulness is limited. We present ADANA, an approach to automatically inject comments describing a given piece of Android code. ADANA reuses the descriptions of similar and well-documented code snippets retrieved from various online resources. Our evaluation has shown that ADANA is able to aid the program comprehension process.) <|cite_end|> <|cite_start|> (Reference: Enriching API documentation with code samples and usage scenarios from crowd knowledge: As one key resource to learn Application Programming Interfaces (APIs), a lot of API reference documentation lacks code samples with usage scenarios, thus heavily hindering developers from programming with APIs. Although researchers have investigated how to enrich API documentation with code samples from general code search engines, two main challenges remain to be resolved, including the quality challenge of acquiring high-quality code samples and the mapping challenge of matching code samples to usage scenarios. In this study, we propose a novel approach named ADECK towards enriching API documentation with code samples and corresponding usage scenarios by leveraging crowd knowledge from Stack Overflow, a popular technical Question and Answer (Q&A) website attracting millions of developers. Given an API related Q&A pair, a code sample in the answer is extensively evaluated by developers and targeted towards resolving the question under the specified usage scenario. Hence, ADECK can obtain high-quality code samples and map them to corresponding usage scenarios to address the above challenges. Extensive experiments on the Java SE and Android API documentation show that the number of code-sample-illustrated API types in the ADECK-enriched API documentation is 3.35 and 5.76 times as many as that in the raw API documentation. Meanwhile, the quality of code samples obtained by ADECK is better than that of code samples by the baseline approach eXoaDocs in terms of correctness, conciseness, and usability, e.g., the average correctness values of representative code samples obtained by ADECK and eXoaDocs are 4.26 and 3.28 on a 5-point scale in the enriched Java SE API documentation. In addition, an empirical study investigating the impacts of different types of API documentation on the productivity of developers shows that, compared against the raw and the eXoaDocs-enriched API documentation, the ADECK-enriched API documentation can help developers complete 23.81 and 14.29 percent more programming tasks and reduce the average completion time by 9.43 and 11.03 percent.) <|cite_end|>. The state-of-the-art (SOTA) approaches SISE <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. 
Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> and DeepTip <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> formulate this task as \textit{update extractive summarization}. Update summarization aims to generate complementary summaries to \doc, assuming readers are already familiar with the original API documentation <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. 
In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|>. Extractive summarization selects \textbf{insight sentences} for the target API from external sources to form extractive summaries. Insight sentences provide API-relevant insights not covered in the API documentation <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|>. Nonetheless, extractive summaries tend to show a notable level of redundancy and typically have limited readability <|cite_start|> (Reference: Automatic text summarization: A comprehensive survey: ) <|cite_end|>. Notably, readability is one of the most important attributes of the desired software documentation <|cite_start|> (Reference: Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks. We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. 
Our findings can help researchers in designing the next generation of documentation recommender systems.) <|cite_end|> <|cite_start|> (Reference: Usage and usefulness of technical software documentation: An industrial case study: ) <|cite_end|> <|cite_start|> (Reference: The Value of Software Documentation Quality: This paper presents the results of a study on software documentation quality in practice. Goal of this study is identifying the current state of software documentation quality and used analysis techniques for determining software documentation quality. Moreover, we aim at finding out, whether there is a demand for a tool-based software documentation quality analysis approach. This approach consists of a documentation quality model and a document checking tool, as proposed in previous work. We developed an online survey and asked about 300 experts to answer it. The survey was completed by 88 experts and the overall results confirm the importance of software documentation quality as well as the need for better tool support. The survey shows that the most important quality attributes with regard to documentation quality are accuracy, clarity, consistency, readability, structuredness, and understand ability. Most of these quality attributes are currently covered by our software documentation quality analysis approach, some of them (e.g., accuracy, structuredness) still need more attention, i.e. better support in our quality model and tool.) <|cite_end|>, highlighting the need to produce concise and coherent summaries <|cite_start|> (Reference: Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks. We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. Our findings can help researchers in designing the next generation of documentation recommender systems.) <|cite_end|>. Recently, research in utilizing large language models for abstractive summarization has garnered significant attention <|cite_start|> (Reference: Benchmarking Large Language Models for News Summarization: Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, and not model size, is the key to the LLM's zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. 
Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries.) <|cite_end|> <|cite_start|> (Reference: Prompted opinion summarization with GPT-3.5: Large language models have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3.5 to summarize a large collection of user reviews in a prompted fashion. To handle arbitrarily large numbers of user reviews, we explore recursive summarization as well as methods for selecting salient content to summarize through supervised clustering or extraction.
On two datasets, an aspect-oriented summarization dataset of hotel reviews (SPACE) and a generic summarization dataset of Amazon and Yelp reviews (FewSum), we show that GPT-3.5 models achieve very strong performance in human evaluation. We argue that standard evaluation metrics do not reflect this, and introduce three new metrics targeting faithfulness, factuality, and genericity to contrast these different methods.) <|cite_end|> <|cite_start|> (Reference: Extractive Summarization via ChatGPT for Faithful Summary Generation: Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.) <|cite_end|>. However, its effectiveness on API documentation augmentation remains unknown. In this work, we conduct the first empirical study on GPT-4 for \doc augmentation and compare it with existing approaches <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) 
<|cite_end|> <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> (detailed in Section \ref{sec:pilot}). Our empirical study reveals that GPT-4 can generate coherent and concise summaries to augment \doc. However, GPT-4 exhibits several drawbacks concerning informativeness and faithfulness. First, the input length limitation of GPT-4 leads to information loss due to the truncation of the prompt. We empirically discover that the token limit of GPT-4 permits an average of only 6.35 API-relevant \so threads to be embedded in the prompt. Meanwhile, the cost of the GPT-4 API may be a burden for individual developers generating documentation (e.g., each GPT-4 API call costs around \$0.3 if the token number reaches the 8k limit). Furthermore, GPT-4 occasionally generates non-faithful summaries, i.e., summaries that do not accurately reflect the original meaning of API-relevant resources <|cite_start|> (Reference: On Faithfulness and Factuality in Abstractive Summarization: It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. Our human annotators found substantial amounts of hallucinated content in all model generated summaries. However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans. Furthermore, we show that textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.)
This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation; and (3) hallucinations in large language models (LLMs). This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.) <|cite_end|>, has hindered the trustworthy usage of GPT-4 in industries like medicine <|cite_start|> (Reference: Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine.: ) <|cite_end|> and law. We also observe that assessing the faithfulness of such hallucinations may require substantial time or, in some cases, might not even be feasible. Since the accuracy of the information in software documentation is considered important for \se{} practitioners <|cite_start|> (Reference: Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks. We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. Our findings can help researchers in designing the next generation of documentation recommender systems.) <|cite_end|>, the existence of hallucinations also hinders the adoption of GPT-4 in this task. Moreover, our pilot study reveals that the summaries generated by existing extractive summarization approaches <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. 
We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> are usually more faithful to external resources, which is attributed to direct sentence extraction without modification. Moreover, extractive summaries demonstrate a considerable degree of informativeness and relevance, since they are not subject to input length limitations. The success of extractive summaries in terms of informativeness and faithfulness inspires us to use extractive summarization to guide abstractive summarization, addressing the drawbacks of the latter. We adopt an extract-then-abstract pipeline consisting of two phases: 1) Extract Phase: extract insight sentences from external resources to form extractive summaries, and 2) Abstract Phase: ask GPT-4 to generate abstractive summaries guided by the extractive summaries. The Extract Phase allows input without length limitations, covering more information from external resources and minimizing API call costs. The Abstract Phase ensures that abstractive summaries are aligned with extractive summaries, thereby enhancing faithfulness and facilitating data provenance. We apply GPT-4 as the summarizer for the Abstract Phase.
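To make the two phases concrete, below is a minimal sketch of the pipeline, assuming a generic sentence-embedding function and an OpenAI-style chat completion call; the identifiers (e.g., embed, chat_complete), the similarity threshold, and the prompt wording are illustrative assumptions rather than our actual implementation:
\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def extract_phase(candidates, doc_sents, embed, k=10, threshold=0.7):
    # Update summarization: keep candidate sentences that are NOT
    # already covered by the official API documentation.
    doc_vecs = [embed(s) for s in doc_sents]
    selected = []
    for sent in candidates:
        v = embed(sent)
        if doc_vecs and max(cosine(v, d) for d in doc_vecs) >= threshold:
            continue  # redundant with the existing documentation
        selected.append(sent)
        if len(selected) == k:
            break
    return selected

def abstract_phase(api_name, extractive_summary, chat_complete):
    # Guide the LLM with the extractive summary and forbid new facts,
    # so the abstractive output stays faithful to the selected sentences.
    prompt = ("Summarize the following insights about the API "
              + api_name + " into one concise, coherent paragraph. "
              "Use only the information given; do not add new facts.\n"
              + "\n".join("- " + s for s in extractive_summary))
    return chat_complete(prompt)  # e.g., a GPT-4 chat completion
\end{verbatim}
In our actual approach, the Extract Phase is realized by \compone and the update summarization algorithm \extalgo introduced below, rather than by a fixed similarity threshold; the sketch only illustrates how dissimilarity to \doc can be enforced before prompting GPT-4.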
We consider SOTA extractive summarization approaches <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> on the API documentation augmentation task as candidates for implementing the Extract Phase. However, we identify four drawbacks of existing approaches, which may hinder their performance in the Extract Phase. Therefore, we build upon existing approaches, extending them to overcome their limitations. Notably, we are particularly concerned about the performance of the Extract Phase as the \textbf{extractive summaries directly determine the informativeness of the final \doc and their relevance to an API.} Specifically, existing approaches \textbf{1)} \textbf{Only take input from a single source.} They <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. 
We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> focus solely on \so, neglecting other sources such as tutorial videos and blogs that developers often refer to. \textbf{2)} \textbf{Unaware of the API documentation structure.} Existing approaches perform binary classification to identify insight sentences, neglecting the standard structure of API documentation. The structure is important for software documentation <|cite_start|> (Reference: The Value of Software Documentation Quality: This paper presents the results of a study on software documentation quality in practice. Goal of this study is identifying the current state of software documentation quality and used analysis techniques for determining software documentation quality. Moreover, we aim at finding out, whether there is a demand for a tool-based software documentation quality analysis approach. This approach consists of a documentation quality model and a document checking tool, as proposed in previous work. We developed an online survey and asked about 300 experts to answer it. The survey was completed by 88 experts and the overall results confirm the importance of software documentation quality as well as the need for better tool support. 
The survey shows that the most important quality attributes with regard to documentation quality are accuracy, clarity, consistency, readability, structuredness, and understandability. Most of these quality attributes are currently covered by our software documentation quality analysis approach, some of them (e.g., accuracy, structuredness) still need more attention, i.e. better support in our quality model and tool.) <|cite_end|>, which commonly consists of multiple \textit{sections} (e.g., expected behavior, range of values, and cause of exceptions). \textbf{3) Neglect contextual dependencies.} Existing approaches analyze each sentence individually and fail to capture semantic dependencies between sentences. Consequently, insight sentences may be considered useless without the necessary context; the existing approach SISE reports that 15\% of its output sentences are confusing for this reason <|cite_start|> (Reference: {Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation.
With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|>. However, existing approaches neglect to reduce redundancy between generated summaries and \doc. To address the drawbacks of existing approaches and fully leverage the capabilities of GPT-4, we propose \toolname, a novel framework with two stages: \textbf{C}ontext-aware \textbf{S}entence \textbf{S}ection \textbf{C}lassification (\compone) and \textbf{UP}date \textbf{SUM}marization (UPSUM). \compone takes as input documents relevant to a specific API from \textbf{multiple sources}. \compone identifies insight sentences and classifies them into suitable API documentation sections for \textbf{structure awareness} while considering the \textbf{contextual dependency} of sentences if necessary. \comptwo applies an extract-then-abstract pipeline, which takes as input insight sentences concerning a target API and outputs abstractive summaries to augment \doc. In the Extract phase, we propose an \textbf{extractive update summarization algorithm} \extalgo that enables the generated summary to be semantically dissimilar from the API documentation. In the Abstract phase, we leverage GPT-4 to generate summaries guided by the output of \extalgo through in-context learning. To enable automatic evaluation of this task, we construct the first dataset \bench through two-phase labeling. We identify 4,344 API-relevant sentences from multiple sources and link them to appropriate \doc sections. Additionally, we produce 48 extractive summaries to augment three sections of \doc, corresponding to 16 APIs in Java and Python. To assess \toolname, we conduct automatic evaluation using \bench on each stage of \toolname and human evaluation on end-to-end performance. Specifically, our automatic evaluation results reveal that \compone outperforms the best baseline by 18.18\%, 20.31\%, and 18.46\% in terms of precision, recall, and F1-score. Moreover, \extalgo outperforms the best baseline by 14.55\%, 24.35\%, and 16.67\% in terms of ROUGE-1, ROUGE-2, and ROUGE-L. Furthermore, our human evaluation results show that \toolname outperforms the GPT-4 baseline by 13.89\%, 15.15\%, and 30.56\% in terms of informativeness (i.e., the quantity of insightful information), relevance (i.e., the proportion of insightful information in the generated summary), and faithfulness (i.e., the extent to which the information in summaries aligns with the corresponding information in external resources), while achieving on-par performance in readability (i.e., fluency and coherence) and in redundancy between the summaries and the original API documentation. The contributions of this paper are the following: \begin{itemize} \item We conduct the first empirical study on GPT-4 performing abstractive summarization for API documentation augmentation. We identify three main drawbacks of GPT-4 and four drawbacks of existing extractive-based summarization approaches. \item We propose \toolname, a two-stage approach with an extract-then-abstract framework to address these challenges.
\item We construct the first dataset \bench, which enables automatic evaluation of extractive summarization for \doc augmentation. \item We evaluate the performance of \toolname via both automatic evaluation and human evaluation. Both evaluation results show that \toolname outperforms the best-performing baseline by a large margin. \end{itemize} Related Work \vspace{0.1cm}\noindent{\bf API Documentation Issues.} Researchers often survey software developers about software documentation issues. Uddin et al. <|cite_start|> (Reference: {How API Documentation Fails: Formal documentation can be a crucial resource for learning how to use an API. However, producing high-quality documentation can be nontrivial. Researchers investigated how 10 common documentation problems manifested themselves in practice. The results are based on two surveys of a total of 323 professional software developers and analysis of 179 API documentation units. The three severest problems were ambiguity, incompleteness, and incorrectness of content. The respondents often mentioned six of the 10 problems as "blockers" that forced them to use another API.) <|cite_end|> summarized and explored the frequency and severity of ten commonly encountered document issues. They highlighted that the most pressing problems concern documentation content. Aghajani et al. <|cite_start|> (Reference: {Software Documentation Issues Unveiled: (Good) Software documentation provides developers and users with a description of what a software system does, how it operates, and how it should be used. For example, technical documentation (e.g., an API reference guide) aids developers during evolution/maintenance activities, while a user manual explains how users are to interact with a system. Despite its intrinsic value, the creation and the maintenance of documentation is often neglected, negatively impacting its quality and usefulness, ultimately leading to a generally unfavourable take on documentation. Previous studies investigating documentation issues have been based on surveying developers, which naturally leads to a somewhat biased view of problems affecting documentation. We present a large scale empirical study, where we mined, analyzed, and categorized 878 documentation-related artifacts stemming from four different sources, namely mailing lists, Stack Overflow discussions, issue repositories, and pull requests. The result is a detailed taxonomy of documentation issues from which we infer a series of actionable proposals both for researchers and practitioners.) <|cite_end|> built a taxonomy of software document issues, including 162 issue types. In their following work <|cite_start|> (Reference: Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks.
We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. Our findings can help researchers in designing the next generation of documentation recommender systems.) <|cite_end|>, they investigated 1) the types of documentation considered useful for each specific development task and 2) the types of documentation issues that practitioners consider most relevant. Furthermore, researchers also employ mining-based strategies to capture software document issues that developers <|cite_start|> (Reference: What are mobile developers asking about? A large scale study using stack overflow: ) <|cite_end|> or end users <|cite_start|> (Reference: What do mobile app users complain about?: Mobile-app quality is becoming an increasingly important issue. These apps are generally delivered through app stores that let users post reviews. These reviews provide a rich data source you can leverage to understand user-reported issues. Researchers qualitatively studied 6,390 low-rated user reviews for 20 free-to-download iOS apps. They uncovered 12 types of user complaints. The most frequent complaints were functional errors, feature requests, and app crashes. Complaints about privacy and ethical issues and hidden app costs most negatively affected ratings. In 11 percent of the reviews, users attributed their complaints to a recent app update. This study provides insight into the user-reported issues of iOS apps, along with their frequency and impact, which can help developers better prioritize their limited quality assurance resources.) <|cite_end|> frequently discuss. Our work is inspired by previous empirical studies on API documentation issues and aims to augment inadequate API documentation. \vspace{0.1cm}\noindent{\bf API Document Augmentation.} Some approaches enhance API documentation by integrating usage code examples and corresponding descriptions <|cite_start|> (Reference: Automatic API Usage Scenario Documentation from Technical Q&A Sites: The online technical Q&A site Stack Overflow (SO) is popular among developers to support their coding and diverse development needs. To address shortcomings in API official documentation resources, several research has thus focused on augmenting official API documentation with insights (e.g., code examples) from SO. The techniques propose to add code examples/insights about APIs into its official documentation. Reviews are opinionated sentences with positive/negative sentiments. However, we are aware of no previous research that attempts to automatically produce API documentation from SO by considering both API code examples and reviews. In this paper, we present two novel algorithms that can be used to automatically produce API documentation from SO by combining code examples and reviews towards those examples. The first algorithm is called statistical documentation, which shows the distribution of positivity and negativity around the code examples of an API using different metrics (e.g., star ratings). The second algorithm is called concept-based documentation, which clusters similar and conceptually relevant usage scenarios.
An API usage scenario contains a code example, a textual description of the underlying task addressed by the code example, and the reviews (i.e., opinions with positive and negative sentiments) from other developers towards the code example. We deployed the algorithms in Opiner, a web-based platform to aggregate information about APIs from online forums. We evaluated the algorithms by mining all Java JSON-based posts in SO and by conducting three user studies based on produced documentation from the posts.) <|cite_end|> <|cite_start|> (Reference: Enriching API documentation with code samples and usage scenarios from crowd knowledge: As one key resource to learn Application Programming Interfaces (APIs), a lot of API reference documentation lacks code samples with usage scenarios, thus heavily hindering developers from programming with APIs. Although researchers have investigated how to enrich API documentation with code samples from general code search engines, two main challenges remain to be resolved, including the quality challenge of acquiring high-quality code samples and the mapping challenge of matching code samples to usage scenarios. In this study, we propose a novel approach named ADECK towards enriching API documentation with code samples and corresponding usage scenarios by leveraging crowd knowledge from Stack Overflow, a popular technical Question and Answer (Q&A) website attracting millions of developers. Given an API related Q&A pair, a code sample in the answer is extensively evaluated by developers and targeted towards resolving the question under the specified usage scenario. Hence, ADECK can obtain high-quality code samples and map them to corresponding usage scenarios to address the above challenges. Extensive experiments on the Java SE and Android API documentation show that the number of code-sample-illustrated API types in the ADECK-enriched API documentation is 3.35 and 5.76 times as many as that in the raw API documentation. Meanwhile, the quality of code samples obtained by ADECK is better than that of code samples by the baseline approach eXoaDocs in terms of correctness, conciseness, and usability, e.g., the average correctness values of representative code samples obtained by ADECK and eXoaDocs are 4.26 and 3.28 on a 5-point scale in the enriched Java SE API documentation. In addition, an empirical study investigating the impacts of different types of API documentation on the productivity of developers shows that, compared against the raw and the eXoaDocs-enriched API documentation, the ADECK-enriched API documentation can help developers complete 23.81 and 14.29 percent more programming tasks and reduce the average completion time by 9.43 and 11.03 percent.) <|cite_end|>. Uddin et al. <|cite_start|> (Reference: Automatic API Usage Scenario Documentation from Technical Q&A Sites: The online technical Q&A site Stack Overflow (SO) is popular among developers to support their coding and diverse development needs. To address shortcomings in API official documentation resources, several research has thus focused on augmenting official API documentation with insights (e.g., code examples) from SO. The techniques propose to add code examples/insights about APIs into its official documentation. Reviews are opinionated sentences with positive/negative sentiments. However, we are aware of no previous research that attempts to automatically produce API documentation from SO by considering both API code examples and reviews. 
In this paper, we present two novel algorithms that can be used to automatically produce API documentation from SO by combining code examples and reviews towards those examples. The first algorithm is called statistical documentation, which shows the distribution of positivity and negativity around the code examples of an API using different metrics (e.g., star ratings). The second algorithm is called concept-based documentation, which clusters similar and conceptually relevant usage scenarios. An API usage scenario contains a code example, a textual description of the underlying task addressed by the code example, and the reviews (i.e., opinions with positive and negative sentiments) from other developers towards the code example. We deployed the algorithms in Opiner, a web-based platform to aggregate information about APIs from online forums. We evaluated the algorithms by mining all Java JSON-based posts in SO and by conducting three user studies based on produced documentation from the posts.) <|cite_end|> treat pairs of API usage examples and corresponding comments extracted from \so as a new form of API documentation. Zhang et al. <|cite_start|> (Reference: Enriching API documentation with code samples and usage scenarios from crowd knowledge: As one key resource to learn Application Programming Interfaces (APIs), a lot of API reference documentation lacks code samples with usage scenarios, thus heavily hindering developers from programming with APIs. Although researchers have investigated how to enrich API documentation with code samples from general code search engines, two main challenges remain to be resolved, including the quality challenge of acquiring high-quality code samples and the mapping challenge of matching code samples to usage scenarios. In this study, we propose a novel approach named ADECK towards enriching API documentation with code samples and corresponding usage scenarios by leveraging crowd knowledge from Stack Overflow, a popular technical Question and Answer (Q&A) website attracting millions of developers. Given an API related Q&A pair, a code sample in the answer is extensively evaluated by developers and targeted towards resolving the question under the specified usage scenario. Hence, ADECK can obtain high-quality code samples and map them to corresponding usage scenarios to address the above challenges. Extensive experiments on the Java SE and Android API documentation show that the number of code-sample-illustrated API types in the ADECK-enriched API documentation is 3.35 and 5.76 times as many as that in the raw API documentation. Meanwhile, the quality of code samples obtained by ADECK is better than that of code samples by the baseline approach eXoaDocs in terms of correctness, conciseness, and usability, e.g., the average correctness values of representative code samples obtained by ADECK and eXoaDocs are 4.26 and 3.28 on a 5-point scale in the enriched Java SE API documentation. In addition, an empirical study investigating the impacts of different types of API documentation on the productivity of developers shows that, compared against the raw and the eXoaDocs-enriched API documentation, the ADECK-enriched API documentation can help developers complete 23.81 and 14.29 percent more programming tasks and reduce the average completion time by 9.43 and 11.03 percent.) <|cite_end|> proposed ADECK to extract code examples from \so and map them to a specific API usage scenario. 
Unlike prior works that focus on providing code examples, our work targets augmenting the natural language part of API documentation, which is complementary to the approaches above. Many works aim to enrich the textual part of API documentation. Treude et al. <|cite_start|> (Reference: Augmenting API Documentation with Insights from Stack Overflow: Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with "insight sentences" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data.) <|cite_end|> identify insightful sentences about a specific API from \so to augment Java API documentation. DeepTip <|cite_start|> (Reference: Extracting API tips from developer question and answer websites: The success of question and answer (Q&A) websites attracts massive user-generated content for using and learning APIs, which easily leads to information overload: many questions for APIs have a large number of answers containing useful and irrelevant information, and cannot all be consumed by developers. In this work, we develop DeepTip, a novel deep learning-based approach using different Convolutional Neural Network architectures, to extract short practical and useful tips from developer answers. Our extensive empirical experiments prove that DeepTip can extract useful tips from a large corpus of answers to questions with high precision (i.e., avg. 0.854) and coverage (i.e., 0.94), and it outperforms two state-of-the-art baselines by up to 56.7% and 162%, respectively, in terms of Precision. Furthermore, qualitatively, a user study is conducted with real Stack Overflow users and its results confirm that tip extraction is useful and our approach generates high-quality tips.) <|cite_end|> extracts \so sentences as API tips by training a CNN-based classifier. All of these works frame the task as binary classification, identifying insightful sentences solely from \so. In contrast, our approach frames the task as multi-class classification, mapping each sentence to an API document section, and follows it with an updating summarization step to improve the readability of API documentation. In addition, we consider the context of the sentence as well as the structure of \doc, as the sketch below illustrates.
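To make the multi-class formulation concrete, the following is a minimal, purely illustrative sketch (not the implementation used in this work); the section labels, the toy sentences, the "[CTX]" separator for appending a sentence's surrounding context, and the TF-IDF plus logistic-regression pipeline are all assumptions made for exposition.

# Illustrative only: map a Stack Overflow sentence (with its context)
# to an API documentation section via multi-class classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: each input concatenates the sentence and
# its surrounding context, so the classifier sees where it appeared.
train_texts = [
    "Call parser.parse() once per document. [CTX] The parser is thread-safe.",
    "This API is slow on large inputs. [CTX] We benchmarked it on 1GB files.",
    "It throws an exception on malformed input. [CTX] Wrap calls in try/catch.",
]
train_labels = ["usage", "performance", "error-handling"]  # document sections

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

new_sentence = "Parsing fails with an exception on empty strings. [CTX] Seen in v2.1."
print(model.predict([new_sentence])[0])  # prints a predicted documentation section

The sentences classified into each section would then feed the summarization step described above, which rewrites the section for readability.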
Finally, we are the first to leverage abstractive summarization approaches and GPT in this task. \balance <|paper_end|>
[ "<|reference_start|> Beyond Accuracy: Assessing Software Documentation Quality: Good software documentation encourages good software engineering, but the meaning of \"good\" documentation is vaguely defined in the software engineering literature. To clarify this ambiguity, we draw on work from the data and information quality community to propose a framework that decomposes documentation quality into ten dimensions of structure, content, and style. To demonstrate its application, we recruited technical editors to apply the framework when evaluating examples from several genres of software documentation. We summarise their assessments -- for example, reference documentation and README files excel in quality whereas blog articles have more problems -- and we describe our vision for reasoning about software documentation quality and for the expansion and potential of a unified quality framework. <|reference_end|>", "<|reference_start|> Software documentation: The practitioners' perspective: In theory, (good) documentation is an invaluable asset to any software project, as it helps stakeholders to use, understand, maintain, and evolve a system. In practice, however, documentation is generally affected by numerous shortcomings and issues, such as insufficient and inadequate content and obsolete, ambiguous information. To counter this, researchers are investigating the development of advanced recommender systems that automatically suggest high-quality documentation, useful for a given task. A crucial first step is to understand what quality means for practitioners and what information is actually needed for specific tasks. We present two surveys performed with 146 practitioners to investigate (i) the documentation issues they perceive as more relevant together with solutions they apply when these issues arise; and (ii) the types of documentation considered as important in different tasks. Our findings can help researchers in designing the next generation of documentation recommender systems. <|reference_end|>", "<|reference_start|> Extractive Summarization via ChatGPT for Faithful Summary Generation: Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches. 
<|reference_end|>", "<|reference_start|> On Faithfulness and Factuality in Abstractive Summarization: It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. Our human annotators found substantial amounts of hallucinated content in all model generated summaries. However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans. Furthermore, we show that textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria. <|reference_end|>" ]
[ 8, 24, 27, 32 ]
{"<|multi_cite_1_1|>": "ss-723028", "<|multi_cite_1_2|>": "ss-938338", "<|multi_cite_1_3|>": "arxiv-321699", "<|cite_2|>": "ss-1065125", "<|cite_4|>": "ss-713585", "<|multi_cite_5_1|>": "ss-1914545", "<|multi_cite_5_2|>": "ss-723023", "<|multi_cite_5_3|>": "ss-907621", "<|multi_cite_5_4|>": "arxiv-279936", "<|multi_cite_5_5|>": "ss-1065125", "<|cite_6|>": "ss-2248443", "<|multi_cite_7_1|>": "ss-723028", "<|multi_cite_7_2|>": "ss-958932", "<|multi_cite_7_3|>": "ss-688941", "<|multi_cite_7_4|>": "ss-2133930", "<|multi_cite_7_5|>": "ss-2248444", "<|cite_8|>": "ss-723028", "<|cite_9|>": "ss-958932", "<|cite_10|>": "ss-723028", "<|cite_11|>": "ss-723028", "<|cite_12|>": "ss-966560", "<|multi_cite_13_1|>": "ss-713585", "<|multi_cite_13_2|>": "ss-1266056", "<|multi_cite_13_3|>": "ss-824259", "<|cite_14|>": "ss-713585", "<|multi_cite_15_1|>": "arxiv-478153", "<|multi_cite_15_2|>": "ss-2081115", "<|multi_cite_15_3|>": "arxiv-495665", "<|multi_cite_16_1|>": "ss-2081115", "<|multi_cite_16_2|>": "arxiv-495665", "<|multi_cite_17_1|>": "ss-723028", "<|multi_cite_17_2|>": "ss-958932", "<|cite_18|>": "arxiv-262978", "<|cite_19|>": "arxiv-397651", "<|cite_20|>": "ss-836439", "<|cite_22|>": "ss-713585", "<|multi_cite_23_1|>": "ss-723028", "<|multi_cite_23_2|>": "ss-958932", "<|multi_cite_24_1|>": "ss-958932", "<|multi_cite_24_2|>": "ss-723028", "<|multi_cite_25_1|>": "ss-723028", "<|multi_cite_25_2|>": "ss-958932", "<|cite_27|>": "ss-824259", "<|cite_29|>": "ss-723028", "<|cite_30|>": "ss-723028", "<|cite_31|>": "ss-1914545", "<|cite_32|>": "ss-907621", "<|cite_33|>": "ss-713585", "<|cite_34|>": "ss-1017271", "<|cite_35|>": "ss-2316292", "<|multi_cite_36_1|>": "arxiv-321703", "<|multi_cite_36_2|>": "ss-2248444", "<|cite_37|>": "arxiv-321703", "<|cite_38|>": "ss-2248444", "<|cite_39|>": "ss-723028", "<|cite_40|>": "ss-958932"}
2106.04689
<|paper_start|> Title: Learning to Price Against a Moving Target Abstract: Learning to Price Against a Moving Target: In the Learning to Price setting, a seller posts prices over time with the goal of maximizing revenue while learning the buyer's valuation. This problem is very well understood when values are stationary (fixed or i.i.d.). Here we study the problem where the buyer's value is a moving target, i.e., it changes over time either by a stochastic process or adversarially with bounded variation. In either case, we provide matching upper and lower bounds on the optimal revenue loss. Since the target is moving, any information learned soon becomes outdated, which forces the algorithms to keep switching between exploring and exploiting phases. Introduction Inspired by applications in electronic commerce, we study a problem where a seller repeatedly interacts with a buyer by setting prices for an item and observing whether the buyer purchases or not. These problems are characterized by two salient features: (i) binary feedback: we only observe whether or not the buyer purchased at the price we posted; (ii) discontinuous loss function: pricing just below the buyer's valuation incurs a small loss while pricing just above it incurs a large loss since it results in a no-sale. This problem has been studied with many different assumptions on how the buyer valuation $v_t$ changes over time: the cases of values fixed over time and drawn i.i.d. each round were studied in <|cite_start|> (Reference: The value of knowing a demand curve: bounds on regret for online posted-price auctions: We consider price-setting algorithms for a simple market in which a seller has an unlimited supply of identical copies of some good, and interacts sequentially with a pool of n buyers, each of whom wants at most one copy of the good. In each transaction, the seller offers a price between 0 and 1, and the buyer decides whether or not to buy, by comparing the offered price to his privately-held valuation for the good. The price offered to a given buyer may be influenced by the outcomes of prior transactions, but each individual buyer participates only once. In this setting, what is the value of knowing the demand curve? In other words, how much revenue can an uninformed seller expect to obtain, relative to a seller with prior information about the buyers' valuations? The answer depends on how the buyers' valuations are modeled. We analyze three cases - identical, random, and worst-case valuations - in each case deriving upper and lower bounds which match within a sublogarithmic factor.) <|cite_end|> <|cite_start|> (Reference: Perfect Bayesian Equilibria in Repeated Sales: A special case of Myerson's classic result describes the revenue-optimal equilibrium when a seller offers a single item to a buyer. We study a repeated sales extension of this model: a seller offers to sell a single fresh copy of an item to the same buyer every day via a posted price. The buyer's private value for the item is drawn initially from a publicly known distribution $F$ and remains the same throughout. A key aspect of this game is that the seller might try to learn the buyer's private value to extract more revenue, while the buyer is motivated to hide it. We study the Perfect Bayesian Equilibria (PBE) in this setting with varying levels of commitment power to the seller. We find that the seller having the commitment power to not raise prices subsequent to a purchase significantly improves revenue in a PBE.) <|cite_end|> <|cite_start|> (Reference: Dynamic Pricing with Finitely Many Unknown Valuations: Motivated by posted price auctions where buyers are grouped in an unknown number of latent types characterized by their private values for the good on sale, we investigate revenue maximization in stochastic dynamic pricing when the distribution of buyers' private values is supported on an unknown set of points in [0,1] of unknown cardinality $K$. This setting can be viewed as an instance of a stochastic $K$-armed bandit problem where the location of the arms (the $K$ unknown valuations) must be learned as well. In the distribution-free case, we prove that our setting is just as hard as $K$-armed stochastic bandits: no algorithm can achieve a regret significantly better than $\sqrt{KT}$ (where $T$ is the time horizon); we present an efficient algorithm matching this lower bound up to logarithmic factors. In the distribution-dependent case, we show that for all $K>2$ our setting is strictly harder than $K$-armed stochastic bandits by proving that it is impossible to obtain regret bounds that grow logarithmically in time or slower. On the other hand, when a lower bound $\gamma>0$ on the smallest drop in the demand curve is known, we prove an upper bound on the regret of order $(1/\Delta+(\log \log T)/\gamma^2)(K\log T)$. This is a significant improvement on previously known regret bounds for discontinuous demand curves, that are at best of order $(K^{12}/\gamma^8)\sqrt{T}$. When $K=2$ in the distribution-dependent case, the hardness of our setting reduces to that of a stochastic $2$-armed bandit: we prove that an upper bound of order $(\log T)/\Delta$ (up to $\log\log$ factors) on the regret can be achieved with no information on the demand curve. Finally, we show a $O(\sqrt{T})$ upper bound on the regret for the setting in which the buyers' decisions are nonstochastic, and the regret is measured with respect to the best between two fixed valuations one of which is known to the seller.) <|cite_end|>, deterministic contextual <|cite_start|> (Reference: Repeated contextual auctions with strategic buyers: Motivated by real-time advertising exchanges, we analyze the problem of pricing inventory in a repeated posted-price auction. We consider both the cases of a truthful and surplus-maximizing buyer, where the former makes decisions myopically on every round, and the latter may strategically react to our algorithm, forgoing short-term surplus in order to trick the algorithm into setting better prices in the future. We further assume a buyer's valuation of a good is a function of a context vector that describes the good being sold. We give the first algorithm attaining sublinear ($O(T^{2/3})$) regret in the contextual setting against a surplus-maximizing buyer. We also extend this result to repeated second-price auctions with multiple buyers.) <|cite_end|> <|cite_start|> (Reference: Feature-based dynamic pricing: We consider the problem faced by a firm that receives highly differentiated products in an online fashion and needs to price them in order to sell them to its customer base. Products are described by vectors of features and the market value of each product is linear in the values of the features. The firm does not initially know the values of the different features, but it can learn the values of the features based on whether products were sold at the posted prices in the past.
This model is motivated by a question in online advertising, where impressions arrive over time and can be described by vectors of features. We first consider a multi-dimensional version of binary search over polyhedral sets, and show that it has exponential worst-case regret. We then propose a modification of the prior algorithm where uncertainty sets are replaced by their Lowner-John ellipsoids. We show that this algorithm has a worst-case regret that is quadratic in the dimensionality of the feature space and logarithmic in the time horizon.) <|cite_end|> <|cite_start|> (Reference: Multidimensional Binary Search for Contextual Decision-Making: We consider a multidimensional search problem that is motivated by questions in contextual decision-making, such as dynamic pricing and personalized medicine. Nature selects a state from a $d$-dimensional unit ball and then generates a sequence of $d$-dimensional directions. We are given access to the directions, but not access to the state. After receiving a direction, we have to guess the value of the dot product between the state and the direction. Our goal is to minimize the number of times when our guess is more than $\epsilon$ away from the true answer. We construct a polynomial time algorithm that we call Projected Volume achieving regret $O(d\log(d/\epsilon))$, which is optimal up to a $\log d$ factor. The algorithm combines a volume cutting strategy with a new geometric technique that we call cylindrification.) <|cite_end|> <|cite_start|> (Reference: Contextual Search via Intrinsic Volumes: We study the problem of contextual search, a multidimensional generalization of binary search that captures many problems in contextual decision-making. In contextual search, a learner is trying to learn the value of a hidden vector $v \in [0,1]^d$. Every round the learner is provided an adversarially-chosen context $u_t \in \mathbb{R}^d$, submits a guess $p_t$ for the value of $\langle u_t, v\rangle$, learns whether $p_t < \langle u_t, v\rangle$, and incurs loss $\ell(\langle u_t, v\rangle, p_t)$ (for some loss function $\ell$). The learner's goal is to minimize their total loss over the course of $T$ rounds. We present an algorithm for the contextual search problem for the symmetric loss function $\ell(\theta, p) = |\theta - p|$ that achieves $O_{d}(1)$ total loss. We present a new algorithm for the dynamic pricing problem (which can be realized as a special case of the contextual search problem) that achieves $O_{d}(\log \log T)$ total loss, improving on the previous best known upper bounds of $O_{d}(\log T)$ and matching the known lower bounds (up to a polynomial dependence on $d$). Both algorithms make significant use of ideas from the field of integral geometry, most notably the notion of intrinsic volumes of a convex set. To the best of our knowledge this is the first application of intrinsic volumes to algorithm design.) <|cite_end|> <|cite_start|> (Reference: Optimal Contextual Pricing and Extensions: In the contextual pricing problem a seller repeatedly obtains products described by an adversarially chosen feature vector in $\mathbb{R}^d$ and only observes the purchasing decisions of a buyer with a fixed but unknown linear valuation over the products. The regret measures the difference between the revenue the seller could have obtained knowing the buyer valuation and what can be obtained by the learning algorithm. 
We give a poly-time algorithm for contextual pricing with $O(d \log \log T + d \log d)$ regret which matches the $\Omega(d \log \log T)$ lower bound up to the $d \log d$ additive factor. If we replace pricing loss by the symmetric loss, we obtain an algorithm with nearly optimal regret of $O(d \log d)$ matching the $\Omega(d)$ lower bound up to $\log d$. These algorithms are based on a novel technique of bounding the value of the Steiner polynomial of a convex region at various scales. The Steiner polynomial is a degree $d$ polynomial with intrinsic volumes as the coefficients. We also study a generalized version of contextual search where the hidden linear function over the Euclidean space is replaced by a hidden function $f : \mathcal{X} \rightarrow \mathcal{Y}$ in a certain hypothesis class $\mathcal{H}$. We provide a generic algorithm with $O(d^2)$ regret where $d$ is the covering dimension of this class. This leads in particular to a $\tilde{O}(s^2)$ regret algorithm for linear contextual search if the linear function is guaranteed to be $s$-sparse. Finally we also extend our results to the noisy feedback model, where each round our feedback is flipped with a fixed probability $p < 1/2$.) <|cite_end|>, contextual with parametric noise <|cite_start|> (Reference: Dynamic Pricing in High-dimensions: We study the pricing problem faced by a firm that sells a large number of products, described via a wide range of features, to customers that arrive over time. Customers independently make purchasing decisions according to a general choice model that includes products features and customers' characteristics, encoded as $d$-dimensional numerical vectors, as well as the price offered. The parameters of the choice model are a priori unknown to the firm, but can be learned as the (binary-valued) sales data accrues over time. The firm's objective is to minimize the regret, i.e., the expected revenue loss against a clairvoyant policy that knows the parameters of the choice model in advance, and always offers the revenue-maximizing price. This setting is motivated in part by the prevalence of online marketplaces that allow for real-time pricing. We assume a structured choice model, parameters of which depend on $s_0$ out of the $d$ product features. We propose a dynamic policy, called Regularized Maximum Likelihood Pricing (RMLP) that leverages the (sparsity) structure of the high-dimensional model and obtains a logarithmic regret in $T$. More specifically, the regret of our algorithm is of $O(s_0 \log d \cdot \log T)$. Furthermore, we show that no policy can obtain regret better than $O(s_0 (\log d + \log T))$.) <|cite_end|> and contextual with non-parametric noise <|cite_start|> (Reference: Semi-parametric dynamic contextual pricing: Motivated by the application of real-time pricing in e-commerce platforms, we consider the problem of revenue-maximization in a setting where the seller can leverage contextual information describing the customer's history and the product's type to predict her valuation of the product. However, her true valuation is unobservable to the seller, only binary outcome in the form of success-failure of a transaction is observed. Unlike in usual contextual bandit settings, the optimal price/arm given a covariate in our setting is sensitive to the detailed characteristics of the residual uncertainty distribution. 
We develop a semi-parametric model in which the residual distribution is non-parametric and provide the first algorithm which learns both regression parameters and residual distribution with $\tilde O(\sqrt{n})$ regret. We empirically test a scalable implementation of our algorithm and observe good performance.) <|cite_end|> <|cite_start|> (Reference: Contextual Search in the Presence of Irrational Agents: We study contextual search, a generalization of binary search in higher dimensions, which captures settings such as feature-based dynamic pricing. Standard game-theoretic formulations of this problem assume that agents act in accordance with a specific behavioral model. In practice, some agents may not subscribe to the dominant behavioral model or may act in ways that are seemingly arbitrarily irrational. Existing algorithms heavily depend on the behavioral model being (approximately) accurate for all agents and have poor performance even with a few arbitrarily irrational agents. We initiate the study of contextual search when some of the agents can behave in ways inconsistent with the underlying behavioral model. In particular, we provide two algorithms, one based on multidimensional binary search methods and one based on gradient descent. Our techniques draw inspiration from learning theory, game theory, high-dimensional geometry, and convex analysis.) <|cite_end|>. All those models are stationary in the sense that the buyer's model is i.i.d. across time. The exceptions to this are algorithms that consider valuations that are drawn adversarially <|cite_start|> (Reference: The value of knowing a demand curve: bounds on regret for online posted-price auctions: We consider price-setting algorithms for a simple market in which a seller has an unlimited supply of identical copies of some good, and interacts sequentially with a pool of n buyers, each of whom wants at most one copy of the good. In each transaction, the seller offers a price between 0 and 1, and the buyer decides whether or not to buy, by comparing the offered price to his privately-held valuation for the good. The price offered to a given buyer may be influenced by the outcomes of prior transactions, but each individual buyer participates only once. In this setting, what is the value of knowing the demand curve? In other words, how much revenue can an uninformed seller expect to obtain, relative to a seller with prior information about the buyers' valuations? The answer depends on how the buyers' valuations are modeled. We analyze three cases - identical, random, and worst-case valuations - in each case deriving upper and lower bounds which match within a sublogarithmic factor.) <|cite_end|>, but that work still compares with the best single price in hindsight. That is, even though the buyer model is non-stationary, the benchmark still is. Our main goal in this paper is to explore settings where both the buyer model and the benchmark are non-stationary. We will compare our revenue with the first-best benchmark, namely, the sum of the buyer's value at every single step. We will, however, assume that the buyer's valuation moves slowly. \paragraph{Motivation} Our main motivation for this study is online advertising. Display ads are mostly sold through first-price auctions with reserve prices <|cite_start|> (Reference: Why Do Competitive Markets Converge to First-Price Auctions?: We consider a setting in which bidders participate in multiple auctions run by different sellers, and optimize their bids for the \emph{aggregate} auction. We analyze this setting by formulating a game between sellers, where a seller's strategy is to pick an auction to run. Our analysis aims to shed light on the recent change in the Display Ads market landscape: here, ad exchanges (sellers) were mostly running second-price auctions earlier and over time they switched to variants of the first-price auction, culminating in Google's Ad Exchange moving to a first-price auction in 2019. Our model and results offer an explanation for why the first-price auction occurs as a natural equilibrium in such competitive markets.) <|cite_end|>. In many market segments, the auctions are thin, i.e., there is just one buyer, who bids just above the reserve when their value exceeds the reserve (to both pay as little as possible and avoid revealing their true value) and does not bid otherwise. This scenario effectively offers just binary feedback, and it also makes the reserve price the only pricing tool (since there is little auction competition). To see why buyer values typically change slowly and are unknown to the seller: the effective value of a buyer, even for two identical queries, is similar but not exactly the same due to factors such as the remaining budget. A common scenario is that a buyer has a spend target specifying a fraction $\theta_t$ of their daily budget to be spent by time $t$. Bids often become a function of the ratio between the actual spend and the target spend. The auction platform does not know the targets or the bidding formula, but it can use the fact that both target and actual spend, and hence the bids, will change smoothly over time. Another important motivation is to effectively price buyers who are learning about their own valuation. This is a common setup in finance <|cite_start|> (Reference: Stochastic Calculus for Finance II: Continuous-Time Models: ) <|cite_end|> where traders constantly acquire new information about the products they are trading, and update their valuations accordingly. Our results and techniques are presented in Section \ref{sec:results} after we formally define our model in Section \ref{sec:setting}. \paragraph{Related Work} Our work is situated in the intersection of two lines of work in online learning: online learning for pricing (discussed earlier in the introduction) and online learning with stronger benchmarks, such as tracking regret <|cite_start|> (Reference: Tracking the best linear predictor: In most on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best off-line predictor u from a comparison class of predictors. We call such bounds static bounds. The interesting feature of these bounds is that they hold for an arbitrary sequence of examples. Recently some work has been done where the predictor $u_t$ at each trial $t$ is allowed to change with time, and the total on-line loss of the algorithm is compared to the sum of the losses of $u_t$ at each trial plus the total ``cost'' for shifting to successive predictors. This is to model situations in which the examples change over time, and different predictors from the comparison class are best for different segments of the sequence of examples. We call such bounds shifting bounds. They hold for arbitrary sequences of examples and arbitrary sequences of predictors. Naturally shifting bounds are much harder to prove. The only known bounds are for the case when the comparison class consists of a sequence of experts or boolean disjunctions. In this paper we develop the methodology for lifting known static bounds to the shifting case. In particular we obtain bounds when the comparison class consists of linear neurons (linear combinations of experts). Our essential technique is to project the hypothesis of the static algorithm at the end of each trial into a suitably chosen convex region. This keeps the hypothesis of the algorithm well-behaved and the static bounds can be converted to shifting bounds.) <|cite_end|> <|cite_start|> (Reference: Achieving all with no parameters: Adanormalhedge: We study the classic online learning problem of predicting with expert advice, and propose a truly parameter-free and adaptive algorithm that achieves several objectives simultaneously without using any prior information. The main component of this work is an improved version of the NormalHedge.DT algorithm (Luo and Schapire, 2014), called AdaNormalHedge. On one hand, this new algorithm ensures small regret when the competitor has small loss and almost constant regret when the losses are stochastic. On the other hand, the algorithm is able to compete with any convex combination of the experts simultaneously, with a regret in terms of the relative entropy of the prior and the competitor. This resolves an open problem proposed by Chaudhuri et al. (2009) and Chernov and Vovk (2010). Moreover, we extend the results to the sleeping expert setting and provide two applications to illustrate the power of AdaNormalHedge: 1) competing with time-varying unknown competitors and 2) predicting almost as well as the best pruning tree. Our results on these applications significantly improve previous work from different aspects, and a special case of the first application resolves another open problem proposed by Warmuth and Koolen (2014) on whether one can simultaneously achieve optimal shifting regret for both adversarial and stochastic losses.) <|cite_end|>, adaptive regret <|cite_start|> (Reference: Adaptive Algorithms for Online Decision Problems: We study the notion of learning in an oblivious changing environment. Existing online learning algorithms which minimize regret are shown to converge to the average of all locally optimal solutions. We propose a new performance metric, strengthening the standard metric of regret, to capture convergence to locally optimal solutions, and propose efficient algorithms which provably converge at the optimal rate. One application is the portfolio management problem, for which we show that all previous algorithms behave suboptimally under dynamic market conditions. Another application is online routing, for which our adaptive algorithm exploits local congestion patterns and runs in near-linear time. We also give an algorithm for the tree update problem that is statically optimal for every sufficiently long contiguous subsequence of accesses. Our algorithm combines techniques from data streaming algorithms, composition of learning algorithms, and a twist on the standard experts framework.)
<|cite_end|>, strongly adaptive online learning <|cite_start|> (Reference: Strongly Adaptive Online Learning: Strongly adaptive algorithms are algorithms whose performance on every time interval is close to optimal. We present a reduction that can transform standard low-regret algorithms to strongly adaptive. As a consequence, we derive simple, yet efficient, strongly adaptive algorithms for a handful of problems.) <|cite_end|> and shifting bandits <|cite_start|> (Reference: Learning in Games: Robustness of Fast Convergence: We show that learning algorithms satisfying a $\textit{low approximate regret}$ property experience fast convergence to approximate optimality in a large class of repeated games. Our property, which simply requires that each learner has small regret compared to a $(1+\epsilon)$-multiplicative approximation to the best action in hindsight, is ubiquitous among learning algorithms; it is satisfied even by the vanilla Hedge forecaster. Our results improve upon recent work of Syrgkanis et al. [SALS15] in a number of ways. We require only that players observe payoffs under other players' realized actions, as opposed to expected payoffs. We further show that convergence occurs with high probability, and show convergence under bandit feedback. Finally, we improve upon the speed of convergence by a factor of $n$, the number of players. Both the scope of settings and the class of algorithms for which our analysis provides fast convergence are considerably broader than in previous work. Our framework applies to dynamic population games via a low approximate regret property for shifting experts. Here we strengthen the results of Lykouris et al. [LST16] in two ways: We allow players to select learning algorithms from a larger class, which includes a minor variant of the basic Hedge algorithm, and we increase the maximum churn in players for which approximate optimality is achieved. In the bandit setting we present a new algorithm which provides a "small loss"-type bound with improved dependence on the number of actions in utility settings, and is both simple and efficient. This result may be of independent interest.) <|cite_end|> <|cite_start|> (Reference: Small-loss bounds for online learning with partial information: We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems where the learner observes as feedback only losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so called "small-loss" $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph, and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e. utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), as well as learning with slowly changing (shifting) comparators. 
In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of $\tilde{O}(\sqrt{dL^{\star}})$ for actual regret, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.) <|cite_end|>. The difficulty in applying this line of work to pricing problems is that even when the valuation $v_t$ changes slightly, the loss function itself will change dramatically for certain prices. Instead here, we exploit the special structure to the revenue loss to obtain better regret bounds. There is another line of work that studies revenue maximization in the presence of evolving buyer values <|cite_start|> (Reference: Optimal dynamic mechanism design and the virtual-pivot mechanism: We consider the problem of designing optimal mechanisms for settings where agents have dynamic private information. We present the virtual-pivot mechanism, which is optimal in a large class of environments that satisfy a separability condition. The mechanism satisfies a rather strong equilibrium notion (it is periodic ex post incentive compatible and individually rational). We provide both necessary and sufficient conditions for immediate incentive compatibility for mechanisms that satisfy periodic ex post incentive compatibility in future periods. The result also yields a strikingly simple mechanism for selling a sequence of items to a single buyer. We also show that the allocation rule of the virtual-pivot mechanism has a very simple structure (a virtual index) in multiarmed bandit settings. Finally, we show through examples that the relaxation technique we use does not produce optimal dynamic mechanisms in general nonseparable environments.) <|cite_end|> <|cite_start|> (Reference: Dynamic Mechanism Design: A Myersonian Approach — Supplementary Material: This document contains additional results and an omitted proof for the manuscript Dynamic Mechanism Design: A Myersonian Approach. Section S.1 contains the proof of the one-stagedeviation principle used in the proof of Theorem 2 in the main text. Section S.2 contains a detailed analysis of Example 5 from the main text. Section S.3 establishes conditions under which the allocation rule maximizing expected virtual surplus distorts allocations downwards compared to the first-best rule. Section S.4 discusses distortions in discrete type models using the language of impulse responses. Section S.5 considers optimal mechanisms in some classes of non-Markov environments. All numbered items (i.e., sections, definitions, results, and equations) in this document contain the prefix S. Any numbered reference without a prefix refers to an item in the main text. Please refer to the main text for notation and definitions.) <|cite_end|> <|cite_start|> (Reference: Simple Pricing Schemes For Consumers With Evolving Values: We consider a pricing problem where a buyer is interested in purchasing/using a good, such as an app or music or software, repeatedly over time. The consumer discovers his value for the good only as he uses it, and the value evolves with each use. 
Optimizing for the seller's revenue in such dynamic settings is a complex problem and requires assumptions about how the buyer behaves before learning his future value(s), and in particular, how he reacts to risk. We explore the performance of a class of pricing mechanisms that are extremely simple for both the buyer and the seller to use: the buyer reacts to prices myopically without worrying about how his value evolves in the future; the seller needs to optimize for revenue over a space of only two parameters, and can do so without knowing the buyer's risk profile or fine details of the value evolution process. We present simple-versus-optimal type results, namely that under certain assumptions, simple pricing mechanisms of the above form are approximately optimal regardless of the buyer's risk profile. Our results assume that the buyer's value per usage evolves as a martingale. For our main result, we consider pricing mechanisms in which the seller offers the product for free for a certain number of uses, and then charges an appropriate fixed price per usage. We assume that the buyer responds by buying the product for as long as his value exceeds the fixed price. Importantly, the buyer does not need to know anything about how his future value will evolve, only how much he wants to use the product right now. Regardless of the buyers' initial value, our pricing captures as revenue a constant fraction of the total value that the buyers accumulate in expectation over time.) <|cite_end|>. While all these works consider the cumulative value over time as benchmark, there are important differences: the first two papers have full feedback, since they design mechanisms that solicit buyer bids. <|cite_start|> (Reference: Simple Pricing Schemes For Consumers With Evolving Values: We consider a pricing problem where a buyer is interested in purchasing/using a good, such as an app or music or software, repeatedly over time. The consumer discovers his value for the good only as he uses it, and the value evolves with each use. Optimizing for the seller's revenue in such dynamic settings is a complex problem and requires assumptions about how the buyer behaves before learning his future value(s), and in particular, how he reacts to risk. We explore the performance of a class of pricing mechanisms that are extremely simple for both the buyer and the seller to use: the buyer reacts to prices myopically without worrying about how his value evolves in the future; the seller needs to optimize for revenue over a space of only two parameters, and can do so without knowing the buyer's risk profile or fine details of the value evolution process. We present simple-versus-optimal type results, namely that under certain assumptions, simple pricing mechanisms of the above form are approximately optimal regardless of the buyer's risk profile. Our results assume that the buyer's value per usage evolves as a martingale. For our main result, we consider pricing mechanisms in which the seller offers the product for free for a certain number of uses, and then charges an appropriate fixed price per usage. We assume that the buyer responds by buying the product for as long as his value exceeds the fixed price. Importantly, the buyer does not need to know anything about how his future value will evolve, only how much he wants to use the product right now. Regardless of the buyers' initial value, our pricing captures as revenue a constant fraction of the total value that the buyers accumulate in expectation over time.) <|cite_end|> aims for simple pricing schemes yielding constant-factor approximations, while we seek to get much closer to the optimum. Moreover, in their model the values evolve only when the buyer purchases the good. <|paper_end|>
[ "<|reference_start|> Feature-based dynamic pricing: We consider the problem faced by a firm that receives highly differentiated products in an online fashion and needs to price them in order to sell them to its customer base. Products are described by vectors of features and the market value of each product is linear in the values of the features. The firm does not initially know the values of the different features, but it can learn the values of the features based on whether products were sold at the posted prices in the past. This model is motivated by a question in online advertising, where impressions arrive over time and can be described by vectors of features. We first consider a multi-dimensional version of binary search over polyhedral sets, and show that it has exponential worst-case regret. We then propose a modification of the prior algorithm where uncertainty sets are replaced by their Lowner-John ellipsoids. We show that this algorithm has a worst-case regret that is quadratic in the dimensionality of the feature space and logarithmic in the time horizon. <|reference_end|>", "<|reference_start|> Semi-parametric dynamic contextual pricing: Motivated by the application of real-time pricing in e-commerce platforms, we consider the problem of revenue-maximization in a setting where the seller can leverage contextual information describing the customer's history and the product's type to predict her valuation of the product. However, her true valuation is unobservable to the seller, only binary outcome in the form of success-failure of a transaction is observed. Unlike in usual contextual bandit settings, the optimal price/arm given a covariate in our setting is sensitive to the detailed characteristics of the residual uncertainty distribution. We develop a semi-parametric model in which the residual distribution is non-parametric and provide the first algorithm which learns both regression parameters and residual distribution with $\\tilde O(\\sqrt{n})$ regret. We empirically test a scalable implementation of our algorithm and observe good performance. <|reference_end|>", "<|reference_start|> Tracking the best linear predictor: In most on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best o¬-line predictor u from a comparison class of predictors. We call such bounds static bounds. The interesting feature of these bounds is that they hold for an arbitrary sequence of examples. Recently some work has been done where the predictor ut at each trial t is allowed to change with time, and the total on-line loss of the algorithm is compared to the sum of the losses of ut at each trial plus the total \\cost\" for shifting to successive predictors. This is to model situations in which the examples change over time, and di¬erent predictors from the comparison class are best for di¬erent segments of the sequence of examples. We call such bounds shifting bounds. They hold for arbitrary sequences of examples and arbitrary sequences of predictors. Naturally shifting bounds are much harder to prove. The only known bounds are for the case when the comparison class consists of a sequences of experts or boolean disjunctions. In this paper we develop the methodology for lifting known static bounds to the shifting case. In particular we obtain bounds when the comparison class consists of linear neurons (linear combinations of experts). 
Our essential technique is to project the hypothesis of the static algorithm at the end of each trial into a suitably chosen convex region. This keeps the hypothesis of the algorithm well-behaved and the static bounds can be converted to shifting bounds. <|reference_end|>", "<|reference_start|> Optimal dynamic mechanism design and the virtual-pivot mechanism: We consider the problem of designing optimal mechanisms for settings where agents have dynamic private information. We present the virtual-pivot mechanism, which is optimal in a large class of environments that satisfy a separability condition. The mechanism satisfies a rather strong equilibrium notion (it is periodic ex post incentive compatible and individually rational). We provide both necessary and sufficient conditions for immediate incentive compatibility for mechanisms that satisfy periodic ex post incentive compatibility in future periods. The result also yields a strikingly simple mechanism for selling a sequence of items to a single buyer. We also show that the allocation rule of the virtual-pivot mechanism has a very simple structure (a virtual index) in multiarmed bandit settings. Finally, we show through examples that the relaxation technique we use does not produce optimal dynamic mechanisms in general nonseparable environments. <|reference_end|>" ]
[ 4, 9, 14, 20 ]
{"<|multi_cite_9_1|>": "ss-1253899", "<|multi_cite_9_2|>": "arxiv-65909", "<|multi_cite_9_3|>": "arxiv-165293", "<|multi_cite_10_1|>": "ss-1253918", "<|multi_cite_10_2|>": "ss-1253919", "<|multi_cite_10_3|>": "arxiv-109236", "<|multi_cite_10_4|>": "arxiv-154313", "<|multi_cite_10_5|>": "arxiv-251826", "<|cite_1|>": "arxiv-106486", "<|multi_cite_2_1|>": "arxiv-186820", "<|multi_cite_2_2|>": "ss-1328248", "<|cite_11|>": "ss-1253899", "<|cite_3|>": "arxiv-243763", "<|cite_4|>": "ss-1722652", "<|multi_cite_5_1|>": "ss-1002796", "<|multi_cite_5_2|>": "ss-1523922", "<|cite_6|>": "ss-1201775", "<|cite_7|>": "arxiv-73618", "<|multi_cite_8_1|>": "arxiv-100543", "<|multi_cite_8_2|>": "arxiv-139686", "<|multi_cite_12_1|>": "ss-1271321", "<|multi_cite_12_2|>": "ss-827154", "<|multi_cite_12_3|>": "ss-1271322", "<|cite_13|>": "ss-1271322"}
2310.03814-0
<|paper_start|> Title: Optimal Control of District Cooling Energy Plant with Reinforcement Learning and MPC Abstract: Optimal Control of District Cooling Energy Plant with Reinforcement Learning and MPC: We consider the problem of optimal control of district cooling energy plants (DCEPs) consisting of multiple chillers, a cooling tower, and a thermal energy storage (TES), in the presence of time-varying electricity price. A straightforward application of model predictive control (MPC) requires solving a challenging mixed-integer nonlinear program (MINLP) because of the on/off of chillers and the complexity of the DCEP model. Reinforcement learning (RL) is an attractive alternative since its real-time control computation is much simpler. But designing an RL controller is challenging due to myriad design choices and computationally intensive training. In this paper, we propose an RL controller and an MPC controller for minimizing the electricity cost of a DCEP, and compare them via simulations. The two controllers are designed to be comparable in terms of objective and information requirements. The RL controller uses a novel Q-learning algorithm that is based on least-squares policy iteration. We describe the design choices for the RL controller, including the choice of state space and basis functions, that are found to be effective. The proposed MPC controller does not need a mixed integer solver for implementation, but only a nonlinear program (NLP) solver. A rule-based baseline controller is also proposed to aid in comparison. Simulation results show that the proposed RL and MPC controllers achieve similar savings over the baseline controller, about 17%. Introduction In the U.S., 75\% of the electricity is consumed by buildings, and a large part of that is due to heating, ventilation, and air conditioning (HVAC) systems. In university campuses and large hotels, a large portion of the HVAC's share of electricity is consumed by \plantFullname s (\plants), especially in hot and humid climates. A \plant\ produces and supplies chilled water to a group of buildings it serves (hence the moniker ``district''), and the air handling units in those buildings use the chilled water to cool and dehumidify air before supplying it to building interiors. Figure~\ref{fig:DCEP} shows a schematic of such a plant, which consists of multiple chillers that produce chilled water, a cooling tower that rejects the heat extracted from chillers to the environment, and a thermal energy storage system (TES) for storing chilled water. Chillers - the most electricity intensive equipment in the \plant\ - can produce more chilled water than buildings' needs when electricity price is low. The extra chilled water is then stored in the TES, and used during periods of high electricity price to reduce the total electricity cost. The \plantFullname s are also called central plants or chiller plants. \plants\ are traditionally operated with rule-based control algorithms that use heuristics to reduce electricity cost while meeting the load, such as ``chiller priority'', ``storage priority'', and additional control sequencing for the cooling tower operation <|cite_start|> (Reference: Rule-Based Control of Battery Energy Storage for Dispatching Intermittent Renewable Sources: Integrating a battery energy storage system (BESS) with a solar photovoltaic (PV) system or a wind farm can make these intermittent renewable energy sources more dispatchable. 
This paper focuses on the development of a control strategy for optimal use of the BESS for this purpose. The paper considers a rule-based control scheme, which is the solution of the optimal control problem defined, to incorporate the operating constraints of the BESS, such as state of charge limits, charge/discharge current limits, and lifetime. The goal of the control is to have the BESS provide as much smoothing as possible so that the renewable resource can be dispatched on an hourly basis based on the forecasted solar/wind conditions. The effectiveness of this control strategy has been tested by using an actual PV system and wind farm data and it is shown that the BESS can indeed help to cope with variability in wind's and solar's generation.) <|cite_end|> <|cite_start|> (Reference: Development of a Generalized Control Strategy for Thermal Energy Storage in Residential Buildings: In recent years, variable electricity pricing has become available to residential consumers to incentivize load shifting and peak demand reductions during traditional midday peak hours. This is especially important in hot climates where air-conditioning (A/C) use is the primary cause for peak electricity demand. Thermal storage allows consumers to store “cooling” when demand is low and minimize operation of the A/C during peak periods. This paper considers a packaged A/C integrated with thermal energy storage using ice for residential cooling applications. The focus of the paper is the development and validation of a generalized control strategy that can be used for available residential utility rate structures that include different combinations of time-of-use energy and demand charges. The generalized control strategy is based on a unique combination of different heuristic strategies for charging and discharging of storage that are typically applied to commercial-scale A/C systems with integrated thermal energy storage. In order to evaluate overall performance, a model of the proposed system is developed and used to calculate cooling season operating costs for different geographic locations and utility rates. The performance of the generalized strategy is evaluated in comparison to the most commonly employed control strategy for commercial ice storage systems, called chiller-priority control. A range of unit capacities, storage sizes, geographic locations, and residential utility rates are considered. The resulting decrease in operating cost with the generalized control strategy, when compared to chillerpriority control, was as much as 50% based on the utility rates considered in this paper.) <|cite_end|> <|cite_start|> (Reference: Rule-based control strategy to increase photovoltaic self-consumption of a modulating heat pump using water storages and building mass activation: The use of photovoltaic (PV) energy in combination with heat pump systems for heating and cooling of residential buildings can lead to renewable energy self-consumption, reducing the energy required from the grid and the carbon footprint of the building uses. However, energy storage technologies and control strategies are essential to enhance the self-consumption level. This paper proposes and analyzes a new control strategy for the operation of a modulating air-source heat pump, based on the actual PV availability. The solar energy surplus is stored as thermal energy by the use of water tanks and the activation of the thermal capacitance of the building. 
The efficacy of the control strategy is evaluated considering different rule-based strategies, and different boundary conditions. The effect of climate data, building insulation level and thermal inertia are investigated and compared. The results show the efficacy of the proposed strategy to decrease up to 17% the amount of electricity purchased from the grid and to increase the self-consumption by 22%, considering a high-insulated building in Bolzano, Northern Italy. The thermal mass activation is found effective to increase the self-consumption of the system. Nonetheless, the achievable energy reduction depends largely on the building characteristics and the boundary conditions.) <|cite_end|> <|cite_start|> (Reference: Experimental evaluation of simple thermal storage control strategies in low-energy solar houses to reduce electricity consumption during grid on-peak periods: There is growing interest in zero-energy and low-energy buildings, which have a net energy consumption (on an annual basis) of almost zero. Because they can generate both electricity and thermal energy through the use of solar photovoltaic (PV) and solar thermal collectors, and with the help of reduced building thermal demand, low-energy buildings can not only make a significant contribution to energy conservation on an annual basis, but also reduce energy consumption and peak demand. This study focused on electricity consumption during the on-peak period in a low-energy residential solar building and considers the use of a building’s thermal mass and thermal storage to reduce electricity consumption in summer and winter by modulation of temperature setpoints for heat pump and indoor thermostats in summer and additional use of a solar heating loop in winter. Experiments were performed at a low-energy solar demonstration house that has solar collectors, hot water storage, a ground-coupled heat pump, and a thermal storage tank. It was assumed that the on-peak periods were from 2 pm to 5 pm on hot summer days and from 5 pm to 8 pm on cold winter days. To evaluate the potential for utilizing the building’s thermal storage capacity in space cooling and heating, the use of simple control strategies on three test days in summer and two test days in the early spring were compared in terms of net electricity consumption and peak demand, which also considered the electricity generation from solar PV modules on the roof of the house.) <|cite_end|> <|cite_start|> (Reference: Demand response management by means of heat pumps controlled via real time pricing: ) <|cite_end|>. But making the best use of the chillers and the TES to keep the electricity cost at a minimum requires non-trivial decision making due to the discrete nature of some control commands, such as chiller on/off actuation, and the highly nonlinear dynamics of the equipment in \plants. A growing body of work has proposed algorithms for optimal real-time control of \plants. Both Model Predictive Control (MPC) <|cite_start|> (Reference: Predictive control for energy efficient buildings with thermal storage: Modeling, simulation, and experiments: The building sector is the largest energy consumer in the world. Therefore, it is economically, socially, and environmentally significant to reduce the energy consumption of buildings. Achieving substantial energy reduction in buildings may require rethinking the whole processes of design, construction, and operation of a building.
This article focuses on the specific issue of advanced control system design for energy efficient buildings.) <|cite_end|> <|cite_start|> (Reference: Use of model predictive control to enhance the flexibility of thermal energy storage cooling systems: This paper investigates the application of a model predictive controller (MPC) to both a traditional and a novel chilled water thermal energy storage system over for an Austin, Texas, climate. In the novel system, the thermal storage discharges during peak electricity times to meet building cooling load and to supply reduced temperature water for heat rejection in the chiller's condenser. Chiller efficiency improves as the condenser water temperature decreases, shifting more electrical usage to off-peak hours, but may increase overall electrical usage. The MPC is designed to optimize the discharge and recharge of the thermal storage in order to minimize operation costs or energy consumption over a 24-hour prediction horizon. The ability of MPC to level the electrical load profile is also considered. The way in which demand charges are considered in the objective function can greatly influence the system's electrical load profile.) <|cite_end|> <|cite_start|> (Reference: Integrating scheduling and control for economic MPC of buildings with energy storage: ) <|cite_end|> <|cite_start|> (Reference: Virtual testbed for model predictive control development in district cooling systems: ) <|cite_end|> <|cite_start|> (Reference: A mixed-integer linear programming model for real-time cost optimization of building heating, ventilation, and air conditioning equipment: ) <|cite_end|> <|cite_start|> (Reference: Economic MPC and real-time decision making with application to large-scale HVAC energy systems: ) <|cite_end|> <|cite_start|> (Reference: A case study of economic optimization of HVAC systems based on the Stanford University campus airside and waterside systems: Commercial buildings account for $200 billion per year in energy expenditures, with heating, ventilation, and air conditioning (HVAC) systems accounting for most of these costs. In energy markets with time-varying prices and peak demand charges, a significant potential for cost savings is provided by using thermal energy storage to shift energy loads. Since most implementations of HVAC control systems do not optimize energy costs, they have become a primary focus for new strategies aimed at economic optimization. However, some industrial applications, such as large research centers or university campuses, are too large to be solved in a single MPC instance. Decompositions have been proposed in the literature, but it is difficult to evaluate and to compare decompositions against one another when using different systems. In this paper, we present a large-scale industrially relevant case study where solving a single MPC optimization problem is not feasible for real-time implementations. The study is loosely based on the Stanford University campus, consisting of both an airside and waterside system. The airside system includes 500 zones spread throughout 25 campus buildings along with the air handler units and regulatory building automation system used for temperature regulation. The waterside system includes the central plant equipment, such as chillers, that is used to meet the load from the buildings. Active thermal energy storage is also available to the campus. 
The models from this case study are made publicly available for other researchers interested in designing alternative control strategies for managing chilled water production to meet airside loads. The aim of the case study release is to provide a standardized problem for the research community and a benchmark for evaluating performance.) <|cite_end|> <|cite_start|> (Reference: Model predictive control of central chiller plant with thermal energy storage via dynamic programming and mixed-integer linear programming: This work considers the optimal scheduling problem for a campus central plant equipped with a bank of multiple electrical chillers and a thermal energy storage (TES). Typically, the chillers are operated in ON/OFF modes to charge TES and supply chilled water to satisfy the campus cooling demands. A bilinear model is established to describe the system dynamics of the central plant. A model predictive control (MPC) problem is formulated to obtain optimal set-points to satisfy the campus cooling demands and minimize daily electricity cost. At each time step, the MPC problem is represented as a large-scale mixed-integer nonlinear programming problem. We propose a heuristic algorithm to obtain suboptimal solutions for it via dynamic programming (DP) and mixed integer linear programming (MILP). The system dynamics is linearized along the simulated trajectories of the system. The optimal TES operation profile is obtained by solving a DP problem at every horizon, and the optimal chiller operations are obtained by solving an MILP problem at every time step with a fixed TES operation profile. Simulation results show desired performance and computational tractability of the proposed algorithm. This work was motivated by the supervisory control need for a campus central plant. Plant operators have to decide a scheduling strategy to mix and match various chillers with a thermal energy storage to satisfy the campus cooling demands, while minimizing the operation cost. This work mathematically characterizes the system dynamics of a campus central plant and establishes a linear model to predict campus cooling load. It proposes a model predictive control (MPC) strategy to optimally schedule the campus central plant based on plant system dynamics and predicted campus cooling load. A heuristic algorithm is proposed to obtain suboptimal solutions for the MPC problem. The effectiveness and efficiency of the proposed approach are well demonstrated for the central plant at the University of California, Irvine.) <|cite_end|> <|cite_start|> (Reference: Site demonstration and performance evaluation of MPC for a large chiller plant with TES for renewable energy integration and grid decarbonization: ) <|cite_end|>and Reinforcement Learning (RL) <|cite_start|> (Reference: Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. 
We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days.) <|cite_end|> <|cite_start|> (Reference: Model-free optimal chiller loading method based on q-learning: Chillers consume considerable energy in building HVAC systems, and the optimal operation of chillers is essential for energy conservation in buildings. This article proposes a model-free optimal chiller loading (OCL) method for optimizing chiller operation. Unlike model-based OCL methods, the proposed method does not require accurate chiller performance models as a priori knowledge. The proposed method is based on the Q-learning method, a classical reinforcement learning method. With the comprehensive coefficient of performance (COP) of chillers as the environmental feedback, the model-free loading controller can learn autonomously and optimize the chiller loading by adjusting the set points of the chilled water outlet temperature. A central chiller plant in an office building located in Shanghai is selected as a case system to investigate the energy conservation performance of the proposed method through simulations. The simulation results suggest that the proposed method can save 4.36% of chiller energy during the first cooling season compared to the baseline control, which is slightly inferior to the value for the model-based loading method (4.95%). Owing to its acceptable energy-saving capability, the proposed method can be applied to central chiller plants that lack a system model and historical data.) <|cite_end|> <|cite_start|> (Reference: Chilled water temperature resetting using model-free reinforcement learning: Engineering application: ) <|cite_end|> <|cite_start|> (Reference: Marco - multi-agent reinforcement learning based control of building hvac systems: Optimal control of building heating, ventilation, air-conditioning (HVAC) equipment has typically been based on rules and model-based predictive control (MPC). Challenges in developing accurate models of buildings render these approaches sub-optimal and unstable in real-life operations. Model-free Deep Reinforcement Learning (DRL) approaches have been proposed very recently to address this. However, existing works on DRL for HVAC suffer from some limitations. 
First, they consider buildings with few HVAC units, thus leaving open the question of scale. Second, they consider only air-side control of air-handling-units (AHUs) without taking into the water-side chiller control, though chillers account for a significant portion of HVAC energy. Third, they use a single learning agent that adjusts multiple set-points of the HVAC system. We present MARCO - Multi-Agent Reinforcement learning COntrol for HVACs that addresses these challenges. Our approach achieves scale by transfer of learning across HVAC sub-systems. MARCO uses separate DRL agents that control both the AHUs and chillers to jointly optimize HVAC operations. We train and evaluate MARCO on a simulation environment with real-world configurations. We show that MARCO performs better than the as-is HVAC control strategy. We find that MARCO achieves performance comparable to an MPC Oracle that has perfect system knowledge; and better than MPC suffering from systemic calibration uncertainties. Other key findings from our evaluation studies include the following: 1) distributed agents perform significantly better than a central agent for HVAC control; 2) cooperative agents improve over competing agents; and 3) domain knowledge can be exploited to reduce the training time significantly.) <|cite_end|> <|cite_start|> (Reference: Soft Actor-Critic Deep Reinforcement Learning with Hybrid Mixed-Integer Actions for Demand Responsive Scheduling of Energy Systems: ) <|cite_end|> <|cite_start|> (Reference: Application of deep q-networks for model-free optimal control balancing between different hvac systems: A deep Q-network (DQN) was applied for model-free optimal control balancing between different HVAC systems. The DQN was coupled to a reference office building: an EnergyPlus simulation model provided by the U.S. Department of Energy. The building was air-conditioned with four air-handling units (AHUs), two electric chillers, a cooling tower, and two pumps. EnergyPlus simulation results for eleven days (July 1–11) and three subsequent days (July 12–14) were used to improve the DQN policy and test the optimal control. The optimization goal was to minimize the building’s energy use while maintaining the indoor CO2 concentration below 1,000 ppm. It was revealed that the DQN—a reinforcement learning method—can improve its control policy based on prior actions, states, and rewards. The DQN lowered the total energy usage by 15.7% in comparison with the baseline operation while maintaining the indoor CO2 concentration below 1,000 ppm. Compared to model predictive control, the DQN does not require a simulation model, or a predetermined prediction horizon, thus delivering model-free optimal control. Furthermore, it was demonstrated that the DQN can find balanced control actions between different energy consumers in the building, such as chillers, pumps, and AHUs.) <|cite_end|> <|cite_start|> (Reference: Model-free control method based on reinforcement learning for building cooling water systems: Validation by measured data-based simulation: ) <|cite_end|> <|cite_start|> (Reference: Evaluation of reinforcement learning control for thermal energy storage systems: This paper describes a simulation-based investigation of machine-learning control for the supervisory control of building energy systems. Model-free reinforcement learning control is investigated for the operation of electrically driven cool thermal energy storage systems in commercial buildings. 
The reinforcement learning controller learns to charge and discharge a thermal storage tank based on the feedback it receives from past control actions. The learning agent interacts with its environment by commanding the thermal energy storage system and extracts cues about the environment solely based on the reinforcement feedback it receives, which in this study is the monetary cost of each control action. No prediction or system model is required. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The controller learns to account for the time-dependent cost of electricity (both time-of-use and real-time pricing), the availability of thermal storage, part-load performance of the central chilled water plant, and weather conditions. Though reinforcement learning control proved sensitive to the selection of state variables, level of discretization, and learning rate, it effectively learns a difficult task of controlling thermal energy storage and displays good performance. The cost savings compare favorably with conventional cool storage control strategies but do not reach the level of predictive optimal control.) <|cite_end|> <|cite_start|> (Reference: Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that the utilization of active or passive thermal storage, or both, can yield significant peak cooling load reduction and associated electrical demand and operational cost savings. In this study, a model-free learning control is investigated for the operation of electrically driven chilled water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant based on the reinforcement feedback (monetary cost of each action, in this study) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and thermal energy storage charging/discharging rate. The controller extracts information about the environment based solely on the reinforcement signal; the controller does not contain a predictive or system model. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The present analysis shows that learning control is a feasible methodology to find a near-optimal control strategy for exploiting the active and passive building thermal storage capacity, and also shows that the learning performance is affected by the dimensionality of the action and state space, the learning rate and several other factors. It is found that it takes a long time to learn control strategies for tasks associated with large state and action spaces.) <|cite_end|> have been studied. For MPC, a direct implementation requires solving a high-dimensional mixed-integer nonlinear program (MINLP) that is quite challenging to solve.
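For illustration, a generic MPC problem of this kind can be written as follows; the notation here is ours and purely illustrative, not the model used later in this paper. With $p_k$ the electricity price at time step $k$, $\delta_{j,k} \in \{0,1\}$ the on/off command of chiller $j$, $q_{j,k}$ its cooling rate, $P_j(\cdot)$ its (nonlinear) electric power map, $s_k$ the TES state of charge, and $\ell_k$ the forecast cooling load,
\begin{align*}
\min_{\delta,\, q,\, s} \quad & \sum_{k=0}^{N-1} p_k \sum_{j} \delta_{j,k}\, P_j(q_{j,k}) \\
\text{s.t.} \quad & s_{k+1} = \alpha\, s_k + \Delta t \Big( \sum_{j} \delta_{j,k}\, q_{j,k} - \ell_k \Big), \qquad 0 \le s_k \le s^{\max}, \\
& q_j^{\min}\, \delta_{j,k} \le q_{j,k} \le q_j^{\max}\, \delta_{j,k}, \qquad \delta_{j,k} \in \{0,1\},
\end{align*}
where $\alpha \in (0,1]$ models TES standing losses and $N$ is the planning horizon. The binary variables $\delta_{j,k}$ together with the nonlinear power maps $P_j$ are what make the problem a MINLP.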
Various alternative approaches are thus used, which can be categorized into two groups: NLP approximations <|cite_start|> (Reference: Predictive control for energy efficient buildings with thermal storage: Modeling, simulation, and experiments: The building sector is the largest energy consumer in the world. Therefore, it is economically, socially, and environmentally significant to reduce the energy consumption of buildings. Achieving substantial energy reduction in buildings may require rethinking the whole processes of design, construction, and operation of a building. This article focuses on the specific issue of advanced control system design for energy efficient buildings.) <|cite_end|> <|cite_start|> (Reference: Use of model predictive control to enhance the flexibility of thermal energy storage cooling systems: This paper investigates the application of a model predictive controller (MPC) to both a traditional and a novel chilled water thermal energy storage system over for an Austin, Texas, climate. In the novel system, the thermal storage discharges during peak electricity times to meet building cooling load and to supply reduced temperature water for heat rejection in the chiller's condenser. Chiller efficiency improves as the condenser water temperature decreases, shifting more electrical usage to off-peak hours, but may increase overall electrical usage. The MPC is designed to optimize the discharge and recharge of the thermal storage in order to minimize operation costs or energy consumption over a 24-hour prediction horizon. The ability of MPC to level the electrical load profile is also considered. The way in which demand charges are considered in the objective function can greatly influence the system's electrical load profile.) <|cite_end|> <|cite_start|> (Reference: Integrating scheduling and control for economic MPC of buildings with energy storage: ) <|cite_end|> <|cite_start|> (Reference: Virtual testbed for model predictive control development in district cooling systems: ) <|cite_end|> and MILP approximations <|cite_start|> (Reference: A mixed-integer linear programming model for real-time cost optimization of building heating, ventilation, and air conditioning equipment: ) <|cite_end|> <|cite_start|> (Reference: Economic MPC and real-time decision making with application to large-scale HVAC energy systems: ) <|cite_end|> <|cite_start|> (Reference: Model predictive control of central chiller plant with thermal energy storage via dynamic programming and mixed-integer linear programming: This work considers the optimal scheduling problem for a campus central plant equipped with a bank of multiple electrical chillers and a thermal energy storage (TES). Typically, the chillers are operated in ON/OFF modes to charge TES and supply chilled water to satisfy the campus cooling demands. A bilinear model is established to describe the system dynamics of the central plant. A model predictive control (MPC) problem is formulated to obtain optimal set-points to satisfy the campus cooling demands and minimize daily electricity cost. At each time step, the MPC problem is represented as a large-scale mixed-integer nonlinear programming problem. We propose a heuristic algorithm to obtain suboptimal solutions for it via dynamic programming (DP) and mixed integer linear programming (MILP). The system dynamics is linearized along the simulated trajectories of the system.
The optimal TES operation profile is obtained by solving a DP problem at every horizon, and the optimal chiller operations are obtained by solving an MILP problem at every time step with a fixed TES operation profile. Simulation results show desired performance and computational tractability of the proposed algorithm. This work was motivated by the supervisory control need for a campus central plant. Plant operators have to decide a scheduling strategy to mix and match various chillers with a thermal energy storage to satisfy the campus cooling demands, while minimizing the operation cost. This work mathematically characterizes the system dynamics of a campus central plant and establishes a linear model to predict campus cooling load. It proposes a model predictive control (MPC) strategy to optimally schedule the campus central plant based on plant system dynamics and predicted campus cooling load. A heuristic algorithm is proposed to obtain suboptimal solutions for the MPC problem. The effectiveness and efficiency of the proposed approach are well demonstrated for the central plant at the University of California, Irvine.) <|cite_end|> <|cite_start|> (Reference: A case study of economic optimization of HVAC systems based on the Stanford University campus airside and waterside systems: Commercial buildings account for $200 billion per year in energy expenditures, with heating, ventilation, and air conditioning (HVAC) systems accounting for most of these costs. In energy markets with time-varying prices and peak demand charges, a significant potential for cost savings is provided by using thermal energy storage to shift energy loads. Since most implementations of HVAC control systems do not optimize energy costs, they have become a primary focus for new strategies aimed at economic optimization. However, some industrial applications, such as large research centers or university campuses, are too large to be solved in a single MPC instance. Decompositions have been proposed in the literature, but it is difficult to evaluate and to compare decompositions against one another when using different systems. In this paper, we present a large-scale industrially relevant case study where solving a single MPC optimization problem is not feasible for real-time implementations. The study is loosely based on the Stanford University campus, consisting of both an airside and waterside system. The airside system includes 500 zones spread throughout 25 campus buildings along with the air handler units and regulatory building automation system used for temperature regulation. The waterside system includes the central plant equipment, such as chillers, that is used to meet the load from the buildings. Active thermal energy storage is also available to the campus. The models from this case study are made publicly available for other researchers interested in designing alternative control strategies for managing chilled water production to meet airside loads. The aim of the case study release is to provide a standardized problem for the research community and a benchmark for evaluating performance.) <|cite_end|> <|cite_start|> (Reference: Site demonstration and performance evaluation of MPC for a large chiller plant with TES for renewable energy integration and grid decarbonization: ) <|cite_end|>. NLP approximations generally leave the discrete commands to some predetermined control logic and optimize only the continuous control commands, which may limit their savings potential; in the notation of the illustration above, fixing the binaries $\delta_{j,k}$ by such logic reduces the MINLP to an NLP in the continuous variables.
MILP approximations mostly adopt a linear \plant\ model so that the problem is tractable, though solving large MILPs is also challenging. An alternative to MPC is Reinforcement Learning (RL): an umbrella term for a set of tools used to approximate an optimal policy using data collected from a physical system, or more frequently, its simulation. Despite a burdensome design and learning phase, real-time control is simpler since the control computation reduces to evaluating a state-feedback policy. However, designing an RL controller for a \plant\ is quite challenging. The performance of an RL controller depends on many design choices, and training an RL controller is computationally onerous. In this paper we propose an RL controller and an MPC controller for a \plant, and compare their performance with that of a rule-based baseline (BL) controller through simulations. All three controllers are designed to minimize total energy cost while meeting the required cooling load. The main source of flexibility is the TES, which allows a well-designed controller to charge the TES in periods of low electricity price. The proposed RL controller is based on a new learning algorithm that is inspired by the ``convex Q-learning'' proposed in recent work <|cite_start|> (Reference: Convex Q-Learning: It is well known that the extension of Watkins' algorithm to general function approximation settings is challenging: does the “projected Bellman equation” have a solution? If so, is the solution useful in the sense of generating a good policy? And, if the preceding questions are answered in the affirmative, is the algorithm consistent? These questions are unanswered even in the special case of Q-function approximations that are linear in the parameter. The challenge seems paradoxical, given the long history of convex analytic approaches to dynamic programming. Our main contributions are summarized as follows: (i)A new class of convex Q-learning algorithms is introduced based on a convex relaxation of the Bellman equation. Convergence is established under general conditions for linear function approximation. (ii)A batch implementation appears similar to LSPI and DQN algorithms, but the difference is substantial: while convex Q-learning solves a convex program that approximates the Bellman equation, theory for DQN is no stronger than for Watkins algorithm with function approximation. These results are obtained for deterministic nonlinear systems with total cost criterion. Extensions are proposed.) <|cite_end|> and the classical least squares policy iteration (LSPI) algorithm <|cite_start|> (Reference: Least-squares policy iteration: We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach is motivated by the least-squares temporal-difference learning algorithm (LSTD) for prediction problems, which is known for its efficient use of sample experiences compared to pure temporal-difference algorithms. Heretofore, LSTD has not had a straightforward application to control problems mainly because LSTD learns the state value function of a fixed policy which cannot be used for action selection and control without a model of the underlying process. Our new algorithm, least-squares policy iteration (LSPI), learns the state-action value function which allows for action selection without a model and for incremental policy improvement within a policy-iteration framework.
LSPI is a model-free, off-policy method which can use efficiently (and reuse in each iteration) sample experiences collected in any manner. By separating the sample collection method, the choice of the linear approximation architecture, and the solution method, LSPI allows for focused attention on the distinct elements that contribute to practical reinforcement learning. LSPI is tested on the simple task of balancing an inverted pendulum and the harder task of balancing and riding a bicycle to a target location. In both cases, LSPI learns to control the pendulum or the bicycle by merely observing a relatively small number of trials where actions are selected randomly. LSPI is also compared against Q-learning (both with and without experience replay) using the same value function architecture. While LSPI achieves good performance fairly consistently on the difficult bicycle task, Q-learning variants were rarely able to balance for more than a small fraction of the time needed to reach the target location.) <|cite_end|>. Basis functions are carefully designed to reduce the computational burden in training the RL controller; a minimal sketch of such a least-squares update is given below. The proposed MPC controller solves a two-fold non-linear program (NLP) that is transformed from the original MINLP via heuristics. Hence the MPC controller is a ``stand-in'' for a true optimal controller and provides a sub-optimal solution to the original MINLP. The baseline controller that is used for comparison is designed to utilize the TES and time-varying electricity prices (to the extent possible with heuristics) to reduce energy costs. The RL controller and the baseline controller have the same information about the electricity price: the current price and a backward moving average. The objective behind this work is to compare the performance of the two complementary approaches, MPC and RL, for the optimal control of all the principal actuators in a \plant. The two controllers are designed to be comparable in terms of objective and information requirements. We are not aware of many works that have performed such a comparison; the only exceptions are <|cite_start|> (Reference: Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that the utilization of active or passive thermal storage, or both, can yield significant peak cooling load reduction and associated electrical demand and operational cost savings. In this study, a model-free learning control is investigated for the operation of electrically driven chilled water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant based on the reinforcement feedback (monetary cost of each action, in this study) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and thermal energy storage charging/discharging rate. The controller extracts information about the environment based solely on the reinforcement signal; the controller does not contain a predictive or system model. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues.
The present analysis shows that learning control is a feasible methodology to find a near-optimal control strategy for exploiting the active and passive building thermal storage capacity, and also shows that the learning performance is affected by the dimensionality of the action and state space, the learning rate and several other factors. It is found that it takes a long time to learn control strategies for tasks associated with large state and action spaces.) <|cite_end|> <|cite_start|> (Reference: Evaluation of reinforcement learning control for thermal energy storage systems: This paper describes a simulation-based investigation of machine-learning control for the supervisory control of building energy systems. Model-free reinforcement learning control is investigated for the operation of electrically driven cool thermal energy storage systems in commercial buildings. The reinforcement learning controller learns to charge and discharge a thermal storage tank based on the feedback it receives from past control actions. The learning agent interacts with its environment by commanding the thermal energy storage system and extracts cues about the environment solely based on the reinforcement feedback it receives, which in this study is the monetary cost of each control action. No prediction or system model is required. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The controller learns to account for the time-dependent cost of electricity (both time-of-use and real-time pricing), the availability of thermal storage, part-load performance of the central chilled water plant, and weather conditions. Though reinforcement learning control proved sensitive to the selection of state variables, level of discretization, and learning rate, it effectively learns a difficult task of controlling thermal energy storage and displays good performance. The cost savings compare favorably with conventional cool storage control strategies but do not reach the level of predictive optimal control.) <|cite_end|>, but the decision making is limited to a TES or temperature setpoints. Since both RL and MPC approaches have merits and weaknesses, designing a controller with one approach and showing it performs well leaves open the question: would the other have performed better? This paper takes a first step in addressing such questions. To aid in this comparison, both controllers are designed to be approximations of the same intractable infinite-horizon optimal control problem. Due to the large difference in the respective approaches (MPC and RL), it is not possible to ensure exact parallels for an ``apples--to--apples'' comparison. But the design problems for the RL and MPC controllers have been formulated to be as similar as possible. Simulation results show that both controllers, RL and MPC, lead to significant and similar cost savings (16--18\%) over a rule-based baseline controller.
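To make the RL controller's training computation concrete, the following is a minimal sketch of a least-squares policy iteration (LSPI) update with a linear Q-function approximation $Q(x,u) \approx \phi(x,u)^\top \theta$, in the spirit of the algorithm described above. Everything in the sketch (the feature map \texttt{phi}, the finite action set, the hyperparameters) is an illustrative placeholder, not the actual state space, basis functions, or learning algorithm proposed in this paper.
\begin{verbatim}
import numpy as np

def lspi(samples, phi, actions, gamma=0.99, n_iters=20, reg=1e-6):
    """LSPI sketch. samples: list of (x, u, cost, x_next) transitions
    collected offline; phi(x, u): feature vector; actions: finite set."""
    d = phi(samples[0][0], samples[0][1]).shape[0]
    theta = np.zeros(d)
    for _ in range(n_iters):
        A = reg * np.eye(d)   # regularized LSTD-Q matrix
        b = np.zeros(d)
        for x, u, c, x_next in samples:
            f = phi(x, u)
            # Greedy (cost-minimizing) action of the current policy.
            u_next = min(actions, key=lambda a: phi(x_next, a) @ theta)
            # Accumulate the LSTD-Q normal equations A * theta = b,
            # approximating the Bellman equation for the greedy policy.
            A += np.outer(f, f - gamma * phi(x_next, u_next))
            b += c * f
        theta = np.linalg.solve(A, b)
    return theta
\end{verbatim}
Once trained, real-time control amounts to evaluating the greedy action $\arg\min_u \phi(x,u)^\top \theta$ at the measured state, which is why the online computation of an RL controller is trivial compared to solving an optimization program at every time step.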
These savings are comparable to those of MPC controllers with mixed-integer formulations reported in the literature, which vary from 10\% to 17\% <|cite_start|> (Reference: A mixed-integer linear programming model for real-time cost optimization of building heating, ventilation, and air conditioning equipment: ) <|cite_end|> <|cite_start|> (Reference: Economic MPC and real-time decision making with application to large-scale HVAC energy systems: ) <|cite_end|> <|cite_start|> (Reference: Model predictive control of central chiller plant with thermal energy storage via dynamic programming and mixed-integer linear programming: This work considers the optimal scheduling problem for a campus central plant equipped with a bank of multiple electrical chillers and a thermal energy storage (TES). Typically, the chillers are operated in ON/OFF modes to charge TES and supply chilled water to satisfy the campus cooling demands. A bilinear model is established to describe the system dynamics of the central plant. A model predictive control (MPC) problem is formulated to obtain optimal set-points to satisfy the campus cooling demands and minimize daily electricity cost. At each time step, the MPC problem is represented as a large-scale mixed-integer nonlinear programming problem. We propose a heuristic algorithm to obtain suboptimal solutions for it via dynamic programming (DP) and mixed integer linear programming (MILP). The system dynamics is linearized along the simulated trajectories of the system. The optimal TES operation profile is obtained by solving a DP problem at every horizon, and the optimal chiller operations are obtained by solving an MILP problem at every time step with a fixed TES operation profile. Simulation results show desired performance and computational tractability of the proposed algorithm. This work was motivated by the supervisory control need for a campus central plant. Plant operators have to decide a scheduling strategy to mix and match various chillers with a thermal energy storage to satisfy the campus cooling demands, while minimizing the operation cost. This work mathematically characterizes the system dynamics of a campus central plant and establishes a linear model to predict campus cooling load. It proposes a model predictive control (MPC) strategy to optimally schedule the campus central plant based on plant system dynamics and predicted campus cooling load. A heuristic algorithm is proposed to obtain suboptimal solutions for the MPC problem. The effectiveness and efficiency of the proposed approach are well demonstrated for the central plant at the University of California, Irvine.) <|cite_end|> <|cite_start|> (Reference: A case study of economic optimization of HVAC systems based on the Stanford University campus airside and waterside systems: Commercial buildings account for $200 billion per year in energy expenditures, with heating, ventilation, and air conditioning (HVAC) systems accounting for most of these costs. In energy markets with time-varying prices and peak demand charges, a significant potential for cost savings is provided by using thermal energy storage to shift energy loads. Since most implementations of HVAC control systems do not optimize energy costs, they have become a primary focus for new strategies aimed at economic optimization. However, some industrial applications, such as large research centers or university campuses, are too large to be solved in a single MPC instance.
Decompositions have been proposed in the literature, but it is difficult to evaluate and to compare decompositions against one another when using different systems. In this paper, we present a large-scale industrially relevant case study where solving a single MPC optimization problem is not feasible for real-time implementations. The study is loosely based on the Stanford University campus, consisting of both an airside and waterside system. The airside system includes 500 zones spread throughout 25 campus buildings along with the air handler units and regulatory building automation system used for temperature regulation. The waterside system includes the central plant equipment, such as chillers, that is used to meet the load from the buildings. Active thermal energy storage is also available to the campus. The models from this case study are made publicly available for other researchers interested in designing alternative control strategies for managing chilled water production to meet airside loads. The aim of the case study release is to provide a standardized problem for the research community and a benchmark for evaluating performance.) <|cite_end|> <|cite_start|> (Reference: Site demonstration and performance evaluation of MPC for a large chiller plant with TES for renewable energy integration and grid decarbonization: ) <|cite_end|>. The cooling load tracking performance of the two controllers is similar. The real-time computation burden of the RL controller is trivial compared to that of the MPC controller, but the RL controller leads to more frequent chiller switches (from off to on and vice versa). However, the MPC controller enjoys the advantage of error-free forecasts in the simulations, an advantage the RL controller does not have. \begin{figure} \centering \includegraphics[width=1\columnwidth]{IntuitiveSysDiagram.jpg} \caption{Layout of \plantFullname.} \label{fig:DCEP} \end{figure} The rest of the manuscript is organized as follows. The contribution of the paper over the related literature is discussed in detail in Section~\ref{sec:lit}. Section~\ref{sec:sysDesc} describes the \plantFullname\ and its simulation model as well as the control problem. Section~\ref{sec:RL} describes the proposed RL controller, Section~\ref{sec:MPC} the proposed MPC controller, and Section~\ref{sec:ruleBased} describes the baseline controller. Section~\ref{sec:eval} provides a simulation-based evaluation of the controllers. Section~\ref{sec:under-the-hood} provides an ``under-the-hood'' view of the design choices for the RL controller. Section~\ref{sec:conclusion} concludes the paper. \subsection{Literature Review and Contributions}\label{sec:lit} \subsubsection{Prior work on RL for \plant} There is a large and growing body of work in this area, e.g. <|cite_start|> (Reference: Evaluation of reinforcement learning control for thermal energy storage systems: This paper describes a simulation-based investigation of machine-learning control for the supervisory control of building energy systems. Model-free reinforcement learning control is investigated for the operation of electrically driven cool thermal energy storage systems in commercial buildings. The reinforcement learning controller learns to charge and discharge a thermal storage tank based on the feedback it receives from past control actions.
The learning agent interacts with its environment by commanding the thermal energy storage system and extracts cues about the environment solely based on the reinforcement feedback it receives, which in this study is the monetary cost of each control action. No prediction or system model is required. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The controller learns to account for the time-dependent cost of electricity (both time-of-use and real-time pricing), the availability of thermal storage, part-load performance of the central chilled water plant, and weather conditions. Though reinforcement learning control proved sensitive to the selection of state variables, level of discretization, and learning rate, it effectively learns a difficult task of controlling thermal energy storage and displays good performance. The cost savings compare favorably with conventional cool storage control strategies but do not reach the level of predictive optimal control.) <|cite_end|> <|cite_start|> (Reference: Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days.) <|cite_end|> <|cite_start|> (Reference: Model-free optimal chiller loading method based on q-learning: Chillers consume considerable energy in building HVAC systems, and the optimal operation of chillers is essential for energy conservation in buildings. This article proposes a model-free optimal chiller loading (OCL) method for optimizing chiller operation. 
Unlike model-based OCL methods, the proposed method does not require accurate chiller performance models as a priori knowledge. The proposed method is based on the Q-learning method, a classical reinforcement learning method. With the comprehensive coefficient of performance (COP) of chillers as the environmental feedback, the model-free loading controller can learn autonomously and optimize the chiller loading by adjusting the set points of the chilled water outlet temperature. A central chiller plant in an office building located in Shanghai is selected as a case system to investigate the energy conservation performance of the proposed method through simulations. The simulation results suggest that the proposed method can save 4.36% of chiller energy during the first cooling season compared to the baseline control, which is slightly inferior to the value for the model-based loading method (4.95%). Owing to its acceptable energy-saving capability, the proposed method can be applied to central chiller plants that lack a system model and historical data.) <|cite_end|> <|cite_start|> (Reference: Chilled water temperature resetting using model-free reinforcement learning: Engineering application: ) <|cite_end|> <|cite_start|> (Reference: Marco - multi-agent reinforcement learning based control of building hvac systems: Optimal control of building heating, ventilation, air-conditioning (HVAC) equipment has typically been based on rules and model-based predictive control (MPC). Challenges in developing accurate models of buildings render these approaches sub-optimal and unstable in real-life operations. Model-free Deep Reinforcement Learning (DRL) approaches have been proposed very recently to address this. However, existing works on DRL for HVAC suffer from some limitations. First, they consider buildings with few HVAC units, thus leaving open the question of scale. Second, they consider only air-side control of air-handling-units (AHUs) without taking into the water-side chiller control, though chillers account for a significant portion of HVAC energy. Third, they use a single learning agent that adjusts multiple set-points of the HVAC system. We present MARCO - Multi-Agent Reinforcement learning COntrol for HVACs that addresses these challenges. Our approach achieves scale by transfer of learning across HVAC sub-systems. MARCO uses separate DRL agents that control both the AHUs and chillers to jointly optimize HVAC operations. We train and evaluate MARCO on a simulation environment with real-world configurations. We show that MARCO performs better than the as-is HVAC control strategy. We find that MARCO achieves performance comparable to an MPC Oracle that has perfect system knowledge; and better than MPC suffering from systemic calibration uncertainties. Other key findings from our evaluation studies include the following: 1) distributed agents perform significantly better than a central agent for HVAC control; 2) cooperative agents improve over competing agents; and 3) domain knowledge can be exploited to reduce the training time significantly.) <|cite_end|> <|cite_start|> (Reference: Soft Actor-Critic Deep Reinforcement Learning with Hybrid Mixed-Integer Actions for Demand Responsive Scheduling of Energy Systems: ) <|cite_end|> <|cite_start|> (Reference: Application of deep q-networks for model-free optimal control balancing between different hvac systems: A deep Q-network (DQN) was applied for model-free optimal control balancing between different HVAC systems. 
The DQN was coupled to a reference office building: an EnergyPlus simulation model provided by the U.S. Department of Energy. The building was air-conditioned with four air-handling units (AHUs), two electric chillers, a cooling tower, and two pumps. EnergyPlus simulation results for eleven days (July 1–11) and three subsequent days (July 12–14) were used to improve the DQN policy and test the optimal control. The optimization goal was to minimize the building’s energy use while maintaining the indoor CO2 concentration below 1,000 ppm. It was revealed that the DQN—a reinforcement learning method—can improve its control policy based on prior actions, states, and rewards. The DQN lowered the total energy usage by 15.7% in comparison with the baseline operation while maintaining the indoor CO2 concentration below 1,000 ppm. Compared to model predictive control, the DQN does not require a simulation model, or a predetermined prediction horizon, thus delivering model-free optimal control. Furthermore, it was demonstrated that the DQN can find balanced control actions between different energy consumers in the building, such as chillers, pumps, and AHUs.) <|cite_end|> <|cite_start|> (Reference: Model-free control method based on reinforcement learning for building cooling water systems: Validation by measured data-based simulation: ) <|cite_end|> <|cite_start|> (Reference: Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that the utilization of active or passive thermal storage, or both, can yield significant peak cooling load reduction and associated electrical demand and operational cost savings. In this study, a model-free learning control is investigated for the operation of electrically driven chilled water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant based on the reinforcement feedback (monetary cost of each action, in this study) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and thermal energy storage charging/discharging rate. The controller extracts information about the environment based solely on the reinforcement signal; the controller does not contain a predictive or system model. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The present analysis shows that learning control is a feasible methodology to find a near-optimal control strategy for exploiting the active and passive building thermal storage capacity, and also shows that the learning performance is affected by the dimensionality of the action and state space, the learning rate and several other factors. It is found that it takes a long time to learn control strategies for tasks associated with large state and action spaces.) <|cite_end|>. Most of these papers limit the problem to controlling part of a \plant. For instance, the \plant s considered in <|cite_start|> (Reference: Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. 
Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days.) <|cite_end|> <|cite_start|> (Reference: Model-free optimal chiller loading method based on q-learning: Chillers consume considerable energy in building HVAC systems, and the optimal operation of chillers is essential for energy conservation in buildings. This article proposes a model-free optimal chiller loading (OCL) method for optimizing chiller operation. Unlike model-based OCL methods, the proposed method does not require accurate chiller performance models as a priori knowledge. The proposed method is based on the Q-learning method, a classical reinforcement learning method. With the comprehensive coefficient of performance (COP) of chillers as the environmental feedback, the model-free loading controller can learn autonomously and optimize the chiller loading by adjusting the set points of the chilled water outlet temperature. A central chiller plant in an office building located in Shanghai is selected as a case system to investigate the energy conservation performance of the proposed method through simulations. The simulation results suggest that the proposed method can save 4.36% of chiller energy during the first cooling season compared to the baseline control, which is slightly inferior to the value for the model-based loading method (4.95%). Owing to its acceptable energy-saving capability, the proposed method can be applied to central chiller plants that lack a system model and historical data.) 
<|cite_end|> <|cite_start|> (Reference: Chilled water temperature resetting using model-free reinforcement learning: Engineering application: ) <|cite_end|> <|cite_start|> (Reference: Marco - multi-agent reinforcement learning based control of building hvac systems: Optimal control of building heating, ventilation, air-conditioning (HVAC) equipment has typically been based on rules and model-based predictive control (MPC). Challenges in developing accurate models of buildings render these approaches sub-optimal and unstable in real-life operations. Model-free Deep Reinforcement Learning (DRL) approaches have been proposed very recently to address this. However, existing works on DRL for HVAC suffer from some limitations. First, they consider buildings with few HVAC units, thus leaving open the question of scale. Second, they consider only air-side control of air-handling-units (AHUs) without taking into the water-side chiller control, though chillers account for a significant portion of HVAC energy. Third, they use a single learning agent that adjusts multiple set-points of the HVAC system. We present MARCO - Multi-Agent Reinforcement learning COntrol for HVACs that addresses these challenges. Our approach achieves scale by transfer of learning across HVAC sub-systems. MARCO uses separate DRL agents that control both the AHUs and chillers to jointly optimize HVAC operations. We train and evaluate MARCO on a simulation environment with real-world configurations. We show that MARCO performs better than the as-is HVAC control strategy. We find that MARCO achieves performance comparable to an MPC Oracle that has perfect system knowledge; and better than MPC suffering from systemic calibration uncertainties. Other key findings from our evaluation studies include the following: 1) distributed agents perform significantly better than a central agent for HVAC control; 2) cooperative agents improve over competing agents; and 3) domain knowledge can be exploited to reduce the training time significantly.) <|cite_end|> <|cite_start|> (Reference: Application of deep q-networks for model-free optimal control balancing between different hvac systems: A deep Q-network (DQN) was applied for model-free optimal control balancing between different HVAC systems. The DQN was coupled to a reference office building: an EnergyPlus simulation model provided by the U.S. Department of Energy. The building was air-conditioned with four air-handling units (AHUs), two electric chillers, a cooling tower, and two pumps. EnergyPlus simulation results for eleven days (July 1–11) and three subsequent days (July 12–14) were used to improve the DQN policy and test the optimal control. The optimization goal was to minimize the building’s energy use while maintaining the indoor CO2 concentration below 1,000 ppm. It was revealed that the DQN—a reinforcement learning method—can improve its control policy based on prior actions, states, and rewards. The DQN lowered the total energy usage by 15.7% in comparison with the baseline operation while maintaining the indoor CO2 concentration below 1,000 ppm. Compared to model predictive control, the DQN does not require a simulation model, or a predetermined prediction horizon, thus delivering model-free optimal control. Furthermore, it was demonstrated that the DQN can find balanced control actions between different energy consumers in the building, such as chillers, pumps, and AHUs.) <|cite_end|>do not have a TES. Refs. 
<|cite_start|> (Reference: Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days.) <|cite_end|> <|cite_start|> (Reference: Model-free optimal chiller loading method based on q-learning: Chillers consume considerable energy in building HVAC systems, and the optimal operation of chillers is essential for energy conservation in buildings. This article proposes a model-free optimal chiller loading (OCL) method for optimizing chiller operation. Unlike model-based OCL methods, the proposed method does not require accurate chiller performance models as a priori knowledge. The proposed method is based on the Q-learning method, a classical reinforcement learning method. With the comprehensive coefficient of performance (COP) of chillers as the environmental feedback, the model-free loading controller can learn autonomously and optimize the chiller loading by adjusting the set points of the chilled water outlet temperature. A central chiller plant in an office building located in Shanghai is selected as a case system to investigate the energy conservation performance of the proposed method through simulations. The simulation results suggest that the proposed method can save 4.36% of chiller energy during the first cooling season compared to the baseline control, which is slightly inferior to the value for the model-based loading method (4.95%). Owing to its acceptable energy-saving capability, the proposed method can be applied to central chiller plants that lack a system model and historical data.) 
<|cite_end|> <|cite_start|> (Reference: Chilled water temperature resetting using model-free reinforcement learning: Engineering application: ) <|cite_end|> <|cite_start|> (Reference: Soft Actor-Critic Deep Reinforcement Learning with Hybrid Mixed-Integer Actions for Demand Responsive Scheduling of Energy Systems: ) <|cite_end|> <|cite_start|> (Reference: Marco - multi-agent reinforcement learning based control of building hvac systems: Optimal control of building heating, ventilation, air-conditioning (HVAC) equipment has typically been based on rules and model-based predictive control (MPC). Challenges in developing accurate models of buildings render these approaches sub-optimal and unstable in real-life operations. Model-free Deep Reinforcement Learning (DRL) approaches have been proposed very recently to address this. However, existing works on DRL for HVAC suffer from some limitations. First, they consider buildings with few HVAC units, thus leaving open the question of scale. Second, they consider only air-side control of air-handling-units (AHUs) without taking into the water-side chiller control, though chillers account for a significant portion of HVAC energy. Third, they use a single learning agent that adjusts multiple set-points of the HVAC system. We present MARCO - Multi-Agent Reinforcement learning COntrol for HVACs that addresses these challenges. Our approach achieves scale by transfer of learning across HVAC sub-systems. MARCO uses separate DRL agents that control both the AHUs and chillers to jointly optimize HVAC operations. We train and evaluate MARCO on a simulation environment with real-world configurations. We show that MARCO performs better than the as-is HVAC control strategy. We find that MARCO achieves performance comparable to an MPC Oracle that has perfect system knowledge; and better than MPC suffering from systemic calibration uncertainties. Other key findings from our evaluation studies include the following: 1) distributed agents perform significantly better than a central agent for HVAC control; 2) cooperative agents improve over competing agents; and 3) domain knowledge can be exploited to reduce the training time significantly.) <|cite_end|>optimize only the chilled water loop but not the cooling water loop (at the cooling tower), while <|cite_start|> (Reference: Model-free control method based on reinforcement learning for building cooling water systems: Validation by measured data-based simulation: ) <|cite_end|>only optimizes the cooling water loop. The reported energy savings are in the 10-20\% range over rule-based baseline controllers; e.g., 15.7\% in <|cite_start|> (Reference: Application of deep q-networks for model-free optimal control balancing between different hvac systems: A deep Q-network (DQN) was applied for model-free optimal control balancing between different HVAC systems. The DQN was coupled to a reference office building: an EnergyPlus simulation model provided by the U.S. Department of Energy. The building was air-conditioned with four air-handling units (AHUs), two electric chillers, a cooling tower, and two pumps. EnergyPlus simulation results for eleven days (July 1–11) and three subsequent days (July 12–14) were used to improve the DQN policy and test the optimal control. The optimization goal was to minimize the building’s energy use while maintaining the indoor CO2 concentration below 1,000 ppm.
It was revealed that the DQN—a reinforcement learning method—can improve its control policy based on prior actions, states, and rewards. The DQN lowered the total energy usage by 15.7% in comparison with the baseline operation while maintaining the indoor CO2 concentration below 1,000 ppm. Compared to model predictive control, the DQN does not require a simulation model, or a predetermined prediction horizon, thus delivering model-free optimal control. Furthermore, it was demonstrated that the DQN can find balanced control actions between different energy consumers in the building, such as chillers, pumps, and AHUs.) <|cite_end|>, 11.5\% in <|cite_start|> (Reference: Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days.) <|cite_end|>and around 17\% in <|cite_start|> (Reference: Marco - multi-agent reinforcement learning based control of building hvac systems: Optimal control of building heating, ventilation, air-conditioning (HVAC) equipment has typically been based on rules and model-based predictive control (MPC). Challenges in developing accurate models of buildings render these approaches sub-optimal and unstable in real-life operations. Model-free Deep Reinforcement Learning (DRL) approaches have been proposed very recently to address this. However, existing works on DRL for HVAC suffer from some limitations. First, they consider buildings with few HVAC units, thus leaving open the question of scale. Second, they consider only air-side control of air-handling-units (AHUs) without taking into the water-side chiller control, though chillers account for a significant portion of HVAC energy. 
Third, they use a single learning agent that adjusts multiple set-points of the HVAC system. We present MARCO - Multi-Agent Reinforcement learning COntrol for HVACs that addresses these challenges. Our approach achieves scale by transfer of learning across HVAC sub-systems. MARCO uses separate DRL agents that control both the AHUs and chillers to jointly optimize HVAC operations. We train and evaluate MARCO on a simulation environment with real-world configurations. We show that MARCO performs better than the as-is HVAC control strategy. We find that MARCO achieves performance comparable to an MPC Oracle that has perfect system knowledge; and better than MPC suffering from systemic calibration uncertainties. Other key findings from our evaluation studies include the following: 1) distributed agents perform significantly better than a central agent for HVAC control; 2) cooperative agents improve over competing agents; and 3) domain knowledge can be exploited to reduce the training time significantly.) <|cite_end|>. The ref. <|cite_start|> (Reference: Evaluation of reinforcement learning control for thermal energy storage systems: This paper describes a simulation-based investigation of machine-learning control for the supervisory control of building energy systems. Model-free reinforcement learning control is investigated for the operation of electrically driven cool thermal energy storage systems in commercial buildings. The reinforcement learning controller learns to charge and discharge a thermal storage tank based on the feedback it receives from past control actions. The learning agent interacts with its environment by commanding the thermal energy storage system and extracts cues about the environment solely based on the reinforcement feedback it receives, which in this study is the monetary cost of each control action. No prediction or system model is required. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The controller learns to account for the time-dependent cost of electricity (both time-of-use and real-time pricing), the availability of thermal storage, part-load performance of the central chilled water plant, and weather conditions. Though reinforcement learning control proved sensitive to the selection of state variables, level of discretization, and learning rate, it effectively learns a difficult task of controlling thermal energy storage and displays good performance. The cost savings compare favorably with conventional cool storage control strategies but do not reach the level of predictive optimal control.) <|cite_end|>considers a complete \plant, but the control command computed by the RL agent is limited to TES charging and discharging. It is not clear what control law is used to decide chiller commands and cooling water loop setpoints. The work <|cite_start|> (Reference: Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that the utilization of active or passive thermal storage, or both, can yield significant peak cooling load reduction and associated electrical demand and operational cost savings. 
In this study, a model-free learning control is investigated for the operation of electrically driven chilled water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant based on the reinforcement feedback (monetary cost of each action, in this study) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and thermal energy storage charging/discharging rate. The controller extracts information about the environment based solely on the reinforcement signal; the controller does not contain a predictive or system model. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The present analysis shows that learning control is a feasible methodology to find a near-optimal control strategy for exploiting the active and passive building thermal storage capacity, and also shows that the learning performance is affected by the dimensionality of the action and state space, the learning rate and several other factors. It is found that it takes a long time to learn control strategies for tasks associated with large state and action spaces.) <|cite_end|>also considers a complete \plant, with two chillers, a TES, and a large building with an air handling unit. The RL controller is tasked with commanding only the zone temperature setpoint and TES charging/discharging flowrate whilst the control of the chillers or the cooling tower is not considered. Besides, trajectories of external inputs, e.g., outside air temperature and electricity price, are the same for all training days in
[ "<|reference_start|> Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days. <|reference_end|>", "<|reference_start|> Model-free optimal chiller loading method based on q-learning: Chillers consume considerable energy in building HVAC systems, and the optimal operation of chillers is essential for energy conservation in buildings. This article proposes a model-free optimal chiller loading (OCL) method for optimizing chiller operation. Unlike model-based OCL methods, the proposed method does not require accurate chiller performance models as a priori knowledge. The proposed method is based on the Q-learning method, a classical reinforcement learning method. With the comprehensive coefficient of performance (COP) of chillers as the environmental feedback, the model-free loading controller can learn autonomously and optimize the chiller loading by adjusting the set points of the chilled water outlet temperature. A central chiller plant in an office building located in Shanghai is selected as a case system to investigate the energy conservation performance of the proposed method through simulations. The simulation results suggest that the proposed method can save 4.36% of chiller energy during the first cooling season compared to the baseline control, which is slightly inferior to the value for the model-based loading method (4.95%). Owing to its acceptable energy-saving capability, the proposed method can be applied to central chiller plants that lack a system model and historical data. 
<|reference_end|>", "<|reference_start|> Learn to chill: Intelligent chiller scheduling using meta-learning and deep reinforcement learning: Centralized chiller plants with multiple chillers are typically over-provisioned. Therefore, intelligent scheduling is required for the supply (operating chillers) to efficiently meet the demand (actual cooling load of buildings). Traditional cooling-load based control (CLC) may result in poor part-loaded efficiency. Recent data-driven approaches to chiller control either unrealistically assume perfect knowledge of individual chiller power at various leaving chilled water temperatures (LWTs) or control all chillers with same LWT. We complement existing work with iChill, an end-to-end learning-based intelligent chiller power prediction and scheduling strategy. First, given a dataset of chillers of varying capacities, each of which operates at a fixed LWT and varying loads, iChill meta-learns a model for power prediction. Specifically, for an unseen target chiller, the meta-learned model is re-trained with known LWT to predict power at unseen LWT. Second, given the configuration of a chiller plant and a cooling load profile, iChill learns to schedule individual chillers by jointly deciding the ON/OFF status and LWT; using deep reinforcement learning (DRL). We train and evaluate iChill in a simulated environment with real-world data from a chiller plant of 22 chillers. Specifically, we compare iChill's (1) meta-learned power model with regular transfer learning; and (2) DRL scheduling with multiple baselines including CLC and an oracle model-based predictive control (MPC) strategy with perfect knowledge. We find that iChill's (1) meta-learning improves over transfer learning by up to 15.5%; and (2) DRL scheduling saves 11.5% energy over CLC and is comparable with oracle MPC (12% over CLC). Finally, off-line pre-training of iChill's DRL on the meta-learned chiller models reduces the need for real-world training experimentation by 11x from 3 years to 96 days. <|reference_end|>", "<|reference_start|> Soft Actor-Critic Deep Reinforcement Learning with Hybrid Mixed-Integer Actions for Demand Responsive Scheduling of Energy Systems: <|reference_end|>" ]
[ 50, 51, 55, 58 ]
{"<|multi_cite_2_3|>": "ss-1172481", "<|multi_cite_2_4|>": "ss-827833", "<|multi_cite_2_5|>": "ss-827834", "<|multi_cite_2_6|>": "ss-827835", "<|multi_cite_2_7|>": "ss-827836", "<|multi_cite_3_1|>": "ss-915927", "<|multi_cite_3_2|>": "ss-1667700", "<|multi_cite_3_3|>": "ss-1414915", "<|multi_cite_3_4|>": "ss-827837", "<|multi_cite_3_5|>": "ss-1667699", "<|multi_cite_3_6|>": "ss-1525123", "<|multi_cite_3_7|>": "ss-827838", "<|multi_cite_3_8|>": "ss-2135021", "<|multi_cite_3_9|>": "ss-827839", "<|multi_cite_4_1|>": "ss-827840", "<|multi_cite_4_2|>": "ss-1667704", "<|multi_cite_4_3|>": "ss-827841", "<|multi_cite_4_4|>": "ss-827842", "<|multi_cite_4_5|>": "ss-827843", "<|multi_cite_4_6|>": "ss-827844", "<|multi_cite_4_7|>": "ss-1667705", "<|multi_cite_4_8|>": "ss-1281484", "<|multi_cite_4_9|>": "ss-1166516", "<|multi_cite_5_1|>": "ss-915927", "<|multi_cite_5_2|>": "ss-1667700", "<|multi_cite_5_3|>": "ss-1414915", "<|multi_cite_5_4|>": "ss-827837", "<|multi_cite_6_1|>": "ss-1667699", "<|multi_cite_6_2|>": "ss-1525123", "<|multi_cite_6_3|>": "ss-2135021", "<|multi_cite_6_4|>": "ss-827838", "<|multi_cite_6_5|>": "ss-827839", "<|cite_7|>": "ss-1667701", "<|cite_8|>": "ss-1254344", "<|multi_cite_9_1|>": "ss-1166516", "<|multi_cite_9_2|>": "ss-1281484", "<|multi_cite_10_1|>": "ss-1667699", "<|multi_cite_10_2|>": "ss-1525123", "<|multi_cite_10_3|>": "ss-2135021", "<|multi_cite_10_4|>": "ss-827838", "<|multi_cite_10_5|>": "ss-827839", "<|multi_cite_11_1|>": "ss-1281484", "<|multi_cite_11_2|>": "ss-827840", "<|multi_cite_11_3|>": "ss-1667704", "<|multi_cite_11_4|>": "ss-827841", "<|multi_cite_11_5|>": "ss-827842", "<|multi_cite_11_6|>": "ss-827843", "<|multi_cite_11_7|>": "ss-827844", "<|multi_cite_11_8|>": "ss-1667705", "<|multi_cite_11_9|>": "ss-1166516", "<|multi_cite_12_1|>": "ss-827840", "<|multi_cite_12_2|>": "ss-1667704", "<|multi_cite_12_3|>": "ss-827841", "<|multi_cite_12_4|>": "ss-827842", "<|multi_cite_12_5|>": "ss-827844", "<|multi_cite_13_1|>": "ss-827840", "<|multi_cite_13_2|>": "ss-1667704", "<|multi_cite_13_3|>": "ss-827841", "<|multi_cite_13_4|>": "ss-827843", "<|multi_cite_13_5|>": "ss-827842", "<|cite_14|>": "ss-1667705", "<|cite_15|>": "ss-827844", "<|cite_16|>": "ss-827840", "<|cite_17|>": "ss-827842", "<|cite_18|>": "ss-1281484", "<|cite_19|>": "ss-1166516", "<|cite_20|>": "ss-1166516", "<|multi_cite_21_1|>": "ss-1166516", "<|multi_cite_21_2|>": "ss-1281484", "<|multi_cite_22_1|>": "ss-1667699", "<|multi_cite_22_2|>": "ss-827838", "<|multi_cite_22_3|>": "ss-1525123", "<|cite_23|>": "ss-1667699", "<|cite_24|>": "ss-1525123", "<|cite_25|>": "ss-827838", "<|cite_26|>": "ss-2135021", "<|cite_27|>": "ss-2135021", "<|cite_28|>": "ss-827839", "<|multi_cite_29_1|>": "ss-915927", "<|multi_cite_29_2|>": "ss-827837", "<|multi_cite_29_3|>": "ss-1667700", "<|multi_cite_29_4|>": "ss-1414915", "<|multi_cite_30_1|>": "ss-915927", "<|multi_cite_30_2|>": "ss-1667700", "<|multi_cite_31_1|>": "ss-827837", "<|multi_cite_31_2|>": "ss-1414915", "<|multi_cite_32_1|>": "ss-827840", "<|multi_cite_32_2|>": "ss-1667704", "<|multi_cite_32_3|>": "ss-827841", "<|multi_cite_32_4|>": "ss-827842", "<|multi_cite_32_5|>": "ss-827843", "<|multi_cite_32_6|>": "ss-827844", "<|multi_cite_32_7|>": "ss-1667705", "<|cite_33|>": "ss-1166516", "<|multi_cite_34_1|>": "ss-1667704", "<|multi_cite_34_2|>": "ss-827844", "<|multi_cite_34_3|>": "ss-827842", "<|cite_35|>": "ss-1166516", "<|cite_36|>": "ss-827843", "<|cite_37|>": "ss-827843", "<|cite_38|>": "ss-827845", "<|multi_cite_39_1|>": "ss-1519694", "<|multi_cite_39_2|>": 
"ss-1667702", "<|cite_40|>": "ss-1667703", "<|multi_cite_41_1|>": "ss-2135021", "<|multi_cite_41_2|>": "ss-827838", "<|multi_cite_41_3|>": "ss-1525123", "<|multi_cite_41_4|>": "ss-1667699", "<|multi_cite_41_5|>": "ss-827839", "<|multi_cite_42_1|>": "ss-915927", "<|multi_cite_42_2|>": "ss-827837", "<|multi_cite_42_3|>": "ss-1667700", "<|multi_cite_42_4|>": "ss-1414915", "<|cite_43|>": "arxiv-405547", "<|cite_44|>": "arxiv-405547", "<|cite_45|>": "arxiv-405547", "<|cite_46|>": "arxiv-405547"}
2108.00637
<|paper_start|> Title: From "study with me" to study with you: how activities of Study With Me livestream on Bilibili facilitate SRL community Abstract: From "study with me" to study with you: how activities of Study With Me livestream on Bilibili facilitate SRL community: It has become a trend to use study with me (SWM) Livestream to create a personalized study ambiance. However, we still have little understanding of the activities of SWM livestream and the streamer's motivation to produce SWM livestream. This paper provides an overview of the activities and how streamers regulate these activities of SWM livestream on a Chinese popular User Generated Content(UGC) website, Bilibili. We observed the number and popularity of the SWM livestreams and analyzed 800 livestreams to understand the streamers' study goals. We analyzed 20 SWM livestreams in detail and interviewed 12 streamers and 10 viewers to understand the activities and the streamer's motivation. We found that streamers produced SWM livestream to seek supervision, find like-minded study partners and help and company others. Streamers don't interact or instruct with the viewers directly but use chat-bot and autonomous interaction to alleviated the interaction burden. Unique sessions like checking-in and study progress reporting promote the viewers' social presence, promoting SOC, and enhancing their engagement. Strict rules and punishment are widely used to concentrate the members on study and contribute to positive atmosphere. We also found that SWM livestream often disappears when the examination is done and the streamer faces doubts on motivation and appearance. These findings suggest that SRL community can provide cognitive and socioemotional support for lonely learners to stick to a long-term study. The activities and streamer's practice inspired how streamers can focus on contemplative efforts while controlling the interaction. Introduction Books on the desk, writing or typing hands, an electronic timer, a highly focused streamer, and no direct interaction nor instruction, these are the daily live session a “study with me” (SWM) streamer would show on the screen. In 2018, a South Korean streamer nicknamed the Bot - No - Jam (t) has attracted more than 321000 subscribers on his YouTube channel. He posted SWM livestream on YouTube, and he called this "Study with Me". The duration of this livestream is very long. Each learning session or livestream lasts an average of six hours. On YouTube, SWM livestream are gaining increasing attention. Another similar livestream on YouTube named "Lofi hip hop radio" is a looping animation with chilling music showing a girl studying at home <|cite_start|> (Reference: lofi hip hop radio - beats to relax/study to: “lofi hip-hop radio: beats to relax/study to” by Justin Wang is both a synopsis for and an analysis of a subgenre of hiphop which has only recently risen to popularity. In his analysis, Wang discusses some of lo-fi music's greatest influences, mainly anime and jazz. Using an analysis of lofi channel titles, comments, and graphics on YouTube, Wang theorizes the connections between the recent and rapid ascent of the subgenre, young Americans’ mental health, and recent events in the public sphere, including the election of President Donald Trump.) <|cite_end|>. Surprisingly, in all kinds of reports, streamers and viewers of SWM livestream said it improved their study motivation and provided them with supervision, a sense of companionship, and competition. 
In China, the YouTube-like user-generated content (UGC) video website Bilibili is an essential base for SWM livestreams. cctv.com (China Central Television) released a news piece, \textit{Do you know that this generation of young people would love to study on Bilibili}, on April 17, 2019. The news described the current situation of netizens studying on Bilibili. Data from Bilibili showed that 18.27 million people had studied on Bilibili in 2018. Livestreams hash-tagged \#Study with Me\# have become the category with the longest total livestream time on Bilibili. In 2018, the total duration of SWM livestreams reached 1.46 million hours, across 1.03 million streams. On Bilibili, there can be as many as 2,000 to 3,000 SWM livestreams per day, and popular SWM streamers can have hundreds of thousands of subscribers. \begin{figure}[h] \centering \subfigure[A SWM livestream on Bilibili.]{ \begin{minipage}{15cm} \centering \includegraphics[width=0.6\linewidth]{instruction4.png} \end{minipage} } \subfigure[A typical SWM livestream on YouTube.]{ \begin{minipage}{15cm} \centering \includegraphics[width=0.6\linewidth]{youtube.png} \end{minipage} } \caption{Snapshots of a SWM livestream on Bilibili (whose layout contains almost all the typical activities) and on YouTube.} \label{snapshot} \end{figure} Despite their contribution to a personalized study ambiance, we still have little understanding of the practices and activities of SWM livestreams and of the influence of these activities on an SRL community. Prior research described SWM videos as recordings of the uploaders' real learning sessions, in which the uploaders neither interact with nor instruct the viewers directly. It found that SWM videos are a new way to study in the presence of others and that watching them is a form of environmental regulation in self-regulated learning (SRL) <|cite_start|> (Reference: Personalizing Ambience and Illusionary Presence: How People Use “Study with me” Videos to Create Effective Studying Environments: “Study with me” videos contain footage of people studying for hours, in which social components like conversations or informational content like instructions are absent. Recently, they became increasingly popular on video-sharing platforms. This paper provides the first broad look into what “study with me” videos are and how people use them. We analyzed 30 “study with me” videos and conducted 12 interviews with their viewers to understand their motivation and viewing practices. We identified a three-factor model that explains the mechanism for shaping a satisfactory studying experience in general. One of the factors, a well-suited ambience, was difficult to achieve because of two common challenges: external conditions that prevent studying in study-friendly places and extra cost needed to create a personally desired ambience. We found that the viewers used “study with me” videos to create a personalized ambience at a lower cost, to find controllable peer pressure, and to get emotional support. These findings suggest that the viewers self-regulate their learning through watching “study with me” videos to improve efficiency even when studying alone at home.) <|cite_end|>. However, the potential of SWM livestreams to support an SRL community is still underexplored.
Research on knowledge-sharing-focused livestreams, such as creative livestreams <|cite_start|> (Reference: Sharing the Studio: How Creative Livestreaming Can Inspire, Educate, and Engage: Many artists livestream their creative process, allowing viewers to learn and be inspired from the decisions -- and mistakes -- they make along the way. This paper presents the first broad look at the range of creative activities people stream. Through content analysis of livestream archives, interviews with 8 streamers, and online surveys with 165 viewers, we study current practices and challenges in creative livestream communities and compare them with prior observations of livestreaming in other domains. We observed four common types of creative livestreams: teaching, making, socializing, and performing. We identify three open questions for the research community around how to better support the goals of creative streamers and viewers: how to support richer audience interactions at scale, how to support all parts of the creative process, and how to support watching livestream archives.) <|cite_end|>, programming livestreams, and intangible cultural heritage livestreams <|cite_start|> (Reference: "I feel it is my responsibility to stream": Streaming and Engaging with Intangible Cultural Heritage through Livestreaming: Globalization has led to the destruction of many cultural practices, expressions, and knowledge found within local communities. These practices, defined by UNESCO as Intangible Cultural Heritage (ICH), have been identified, promoted, and safeguarded by nations, academia, organizations and local communities to varying degrees. Despite such efforts, many practices are still in danger of being lost or forgotten forever. With the increased popularity of livestreaming in China, some streamers have begun to use livestreaming to showcase and promote ICH activities. To better understand the practices, opportunities, and challenges inherent in sharing and safeguarding ICH through livestreaming, we interviewed 10 streamers and 8 viewers from China. Through our qualitative investigation, we found that ICH streamers had altruistic motivations and engaged with viewers using multiple modalities beyond livestreams. We also found that livestreaming encouraged real-time interaction and sociality, while non-live curated videos attracted attention from a broader audience and assisted in the archiving of knowledge.) <|cite_end|>, has shown how these livestream practices help streamers and viewers promote culture, mentor each other, get inspired, and develop computer-supported collaborative learning (CSCL). However, SWM livestreams show different characteristics from knowledge-sharing livestreams and CSCL communities because they do not focus on a pedagogical process but instead support self-regulated learning (SRL). In China, with the support of local social networks and other SRL-support software, SWM livestreams in the local environment present various unique activities, as shown in Fig.\ref{snapshot}, and can facilitate an SRL community. We therefore regard SWM livestreams as computer-supported collaborative SRL (CSCSRL), and how the practices of SWM livestreams support such a community and facilitate SRL remains underexplored. In this paper, we explore how SWM livestreams support an SRL community. To understand the activities and practices in SWM livestreams, we observed SWM livestreams on Bilibili for a year and a half.
In combination with titles and snapshots, we analyzed the contents of 800 SWM livestreams to understand the study goals of streamers, and we conducted a more in-depth content analysis of 20 SWM livestreams to understand how their activities are organized, managed, and regulated. To gain a deeper understanding of these activities, we interviewed 12 streamers and 10 viewers. Combining observation and interviews, we found that 1) streamers produce SWM livestreams to seek supervision, find like-minded study partners, and help with others' SRL; 2) sharing study experiences and plans helps members reflect on their study and regulate their study plans and behavior; 3) checking in and study progress reporting promote the viewers' social presence, which can enhance their engagement, help with SRL, and facilitate SOC; 4) strict activity rules, topic limitations, and punishment mechanisms keep members focused on study and contribute to community security; 5) chat-bots and the viewers' autonomous interaction are widely used in SWM livestreams, which alleviates the streamers' interaction burden but limits the interaction itself; and 6) streamers extend the interaction beyond Bilibili, setting up private fan groups and multi-person virtual study rooms to facilitate SRL. The contributions of this work are thus an observation- and interview-based study that identified \romannumeral1) the practices and activities of SWM livestreams and how they differ from entertainment or knowledge-sharing-focused livestreams; \romannumeral2) the motivations of SWM streamers; and \romannumeral3) how streamers organize, manage, and regulate the activities to facilitate SRL and SOC. <|paper_end|>
[ "<|reference_start|> lofi hip hop radio - beats to relax/study to: “lofi hip-hop radio: beats to relax/study to” by Justin Wang is both a synopsis for and an analysis of a subgenre of hiphop which has only recently risen to popularity. In his analysis, Wang discusses some of lo-fi music's greatest influences, mainly anime and jazz. Using an analysis of lofi channel titles, comments, and graphics on YouTube, Wang theorizes the connections between the recent and rapid ascent of the subgenre, young Americans’ mental health, and recent events in the public sphere, including the election of President Donald Trump. <|reference_end|>", "<|reference_start|> Personalizing Ambience and Illusionary Presence: How People Use “Study with me” Videos to Create Effective Studying Environments: “Study with me” videos contain footage of people studying for hours, in which social components like conversations or informational content like instructions are absent. Recently, they became increasingly popular on video-sharing platforms. This paper provides the first broad look into what “study with me” videos are and how people use them. We analyzed 30 “study with me” videos and conducted 12 interviews with their viewers to understand their motivation and viewing practices. We identified a three-factor model that explains the mechanism for shaping a satisfactory studying experience in general. One of the factors, a well-suited ambience, was difficult to achieve because of two common challenges: external conditions that prevent studying in study-friendly places and extra cost needed to create a personally desired ambience. We found that the viewers used “study with me” videos to create a personalized ambience at a lower cost, to find controllable peer pressure, and to get emotional support. These findings suggest that the viewers self-regulate their learning through watching “study with me” videos to improve efficiency even when studying alone at home. <|reference_end|>", "<|reference_start|> Sharing the Studio: How Creative Livestreaming Can Inspire, Educate, and Engage: Many artists livestream their creative process, allowing viewers to learn and be inspired from the decisions -- and mistakes -- they make along the way. This paper presents the first broad look at the range of creative activities people stream. Through content analysis of livestream archives, interviews with 8 streamers, and online surveys with 165 viewers, we study current practices and challenges in creative livestream communities and compare them with prior observations of livestreaming in other domains. We observed four common types of creative livestreams: teaching, making, socializing, and performing. We identify three open questions for the research community around how to better support the goals of creative streamers and viewers: how to support richer audience interactions at scale, how to support all parts of the creative process, and how to support watching livestream archives. <|reference_end|>", "<|reference_start|> \"I feel it is my responsibility to stream\": Streaming and Engaging with Intangible Cultural Heritage through Livestreaming: Globalization has led to the destruction of many cultural practices, expressions, and knowledge found within local communities. These practices, defined by UNESCO as Intangible Cultural Heritage (ICH), have been identified, promoted, and safeguarded by nations, academia, organizations and local communities to varying degrees. 
Despite such efforts, many practices are still in danger of being lost or forgotten forever. With the increased popularity of livestreaming in China, some streamers have begun to use livestreaming to showcase and promote ICH activities. To better understand the practices, opportunities, and challenges inherent in sharing and safeguarding ICH through livestreaming, we interviewed 10 streamers and 8 viewers from China. Through our qualitative investigation, we found that ICH streamers had altruistic motivations and engaged with viewers using multiple modalities beyond livestreams. We also found that livestreaming encouraged real-time interaction and sociality, while non-live curated videos attracted attention from a broader audience and assisted in the archiving of knowledge. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_3|>": "ss-722971", "<|cite_7|>": "ss-722972", "<|cite_8|>": "ss-1205821", "<|cite_10|>": "ss-722973"}
2312.02902-0
<|paper_start|> Title: HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting Abstract: HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting: 3D head animation has seen major quality and runtime improvements over the last few years, particularly empowered by the advances in differentiable rendering and neural radiance fields. Real-time rendering is a highly desirable goal for real-world applications. We propose HeadGaS, a model that uses 3D Gaussian Splats (3DGS) for 3D head reconstruction and animation. In this paper we introduce a hybrid model that extends the explicit 3DGS representation with a base of learnable latent features, which can be linearly blended with low-dimensional parameters from parametric head models to obtain expression-dependent color and opacity values. We demonstrate that HeadGaS delivers state-of-the-art results in real-time inference frame rates, surpassing baselines by up to 2dB, while accelerating rendering speed by over x10. Introduction \label{sec:intro} Reconstructing photorealistic 3D heads that are at the same time controllable and naturally expressive is essential for building digital avatars that look and behave like real humans. This has a wide range of applications, including AR/VR, teleconferencing, and gaming. Designing head models that achieve high-fidelity appearance, are easy to capture, and enable expressive control has been an active research field in recent years, especially due to the rapid development of neural and differentiable rendering approaches. Animatable 3D head reconstruction consists of driving a captured head avatar based on a target sequence of facial expressions and head poses. In the last decades, various parametric 3D morphable models (3DMMs) have emerged <|cite_start|> (Reference: {A morphable model for the synthesis of 3D faces: In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an ''unlikely'' appearance Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.) <|cite_end|> <|cite_start|> (Reference: Learning a model of facial shape and expression from 4d scans: The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression.
We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose and expression dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33, 000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).) <|cite_end|> <|cite_start|> (Reference: FaceWarehouse: a 3D facial expression database for visual computing: We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.) <|cite_end|>, which can be fitted to sequences of a moving head and later on enable pose and expression control. Though these models make it possible to drive a captured avatar via a set of low-dimensional parameters, generally their generated images lack realism. 
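To make the role of these low-dimensional control parameters concrete, the following is a generic sketch of the linear blending used by this family of 3DMMs (notation ours, for illustration only; concrete models such as FLAME additionally include articulated joints and pose-dependent corrective blendshapes):
\begin{equation}
S(\boldsymbol{\alpha}, \boldsymbol{\psi}) = \bar{S} + \sum_{i=1}^{n} \alpha_i \, B^{\mathrm{id}}_i + \sum_{j=1}^{m} \psi_j \, B^{\mathrm{exp}}_j,
\end{equation}
where $\bar{S}$ is the mean head shape, $B^{\mathrm{id}}$ and $B^{\mathrm{exp}}$ are learned identity and expression bases, and $\boldsymbol{\alpha} \in \mathbb{R}^{n}$ and $\boldsymbol{\psi} \in \mathbb{R}^{m}$ are the low-dimensional identity and expression coefficients that serve as control handles.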
Other works utilize the fitting of low-dimensional parameters from such 3DMM models for initial estimates and build on other mechanisms to obtain more realistic imagery with animation capabilities <|cite_start|> (Reference: Neural Head Avatars from Monocular RGB Videos: We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar that can be used for teleconferencing in AR/VR or other applications in the movie or games industry that rely on a digital human. Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views. Specifically, we propose a hybrid representation consisting of a morphable model for the coarse shape and expressions of the face, and two feed-forward networks, predicting vertex offsets of the underlying mesh as well as a view- and expression-dependent texture. We demonstrate that this representation is able to accurately extrapolate to unseen poses and view points, and generates natural expressions while providing sharp texture details. Compared to previous works on head avatars, our method provides a disentangled shape and appearance model of the complete human head (including hair) that is compatible with the standard graphics pipeline. Moreover, it quantitatively and qualitatively outperforms current state of the art in terms of reconstruction quality and novel-view synthesis.) <|cite_end|> <|cite_start|> (Reference: Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction: We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face. Digitally modeling and reconstructing a talking human is a key building-block for a variety of applications. Especially, for telepresence applications in AR or VR, a faithful reproduction of the appearance including novel viewpoints or head-poses is required. In contrast to state-of-the-art approaches that model the geometry and material properties explicitly, or are purely image-based, we introduce an implicit representation of the head based on scene representation networks. To handle the dynamics of the face, we combine our scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions. We use volumetric rendering to generate images from this hybrid representation and demonstrate that such a dynamic neural scene representation can be learned from monocular input data only, without the need of a specialized capture setup. In our experiments, we show that this learned volumetric representation allows for photo-realistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.) <|cite_end|>. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/teaser_final.PNG} \caption{\textbf{Overview of \methodName.} We reconstruct a 3D head based on an expression-aware 3D Gaussian cloud representation, which results in real-time rendering and high image quality. \textbf{Left:} The model is trained with a monocular video of a moving head. 
\textbf{Right:} At inference, we query the model with a novel sequence of camera poses and expression parameters to render a real-time video.} \label{fig:teaser} \end{figure} In particular, with the recent success of differentiable rendering, various 3D-aware animatable head models emerged that can reconstruct and render 3D heads, while providing the functionality to drive them based on expression parameters from 3DMM models. These representations can be explicit (mesh, point clouds) <|cite_start|> (Reference: Neural Head Avatars from Monocular RGB Videos: We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar that can be used for teleconferencing in AR/VR or other applications in the movie or games industry that rely on a digital human. Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views. Specifically, we propose a hybrid representation consisting of a morphable model for the coarse shape and expressions of the face, and two feed-forward networks, predicting vertex offsets of the underlying mesh as well as a view- and expression-dependent texture. We demonstrate that this representation is able to accurately extrapolate to unseen poses and view points, and generates natural expressions while providing sharp texture details. Compared to previous works on head avatars, our method provides a disentangled shape and appearance model of the complete human head (including hair) that is compatible with the standard graphics pipeline. Moreover, it quantitatively and qualitatively outperforms current state of the art in terms of reconstruction quality and novel-view synthesis.) <|cite_end|> <|cite_start|> (Reference: PointAvatar: Deformable Point-based Head Avatars from Videos: The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide ranging applications in communication and entertainment. Current methods either build on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations. The former are limited by fixed topology, while the latter are non-trivial to deform and inefficient to render. Furthermore, existing approaches entangle lighting in the color estimation, thus they are limited in re-rendering the avatar in new environments. In contrast, we propose PointAvatar, a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading. We demonstrate that PointAvatar bridges the gap between existing mesh- and implicit representations, combining high-quality geometry and appearance with topological flexibility, ease of deformation and rendering efficiency. We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources including hand-held smartphones, laptop webcams and internet videos, achieving state-of-the-art quality in challenging cases where previous methods fail, e.g., thin hair strands, while being significantly more efficient in training than competing methods.) <|cite_end|>or implicit (neural) <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. 
Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|>. The explicit models impose stronger constraints on the head surface, which allows for better expression and pose generalization, but makes it more difficult to preserve photo-realism, as they inherit the limitations and artifacts of the underlying representation (mesh, point cloud), as reported in other works <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|>and also seen in our experiments (Fig.~\ref{fig:result}). With the recent success of neural radiance fields (NeRFs) <|cite_start|> (Reference: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.) <|cite_end|>, implicit models are typically based on a NeRF representation <|cite_start|> (Reference: Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction: We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face. Digitally modeling and reconstructing a talking human is a key building-block for a variety of applications. Especially, for telepresence applications in AR or VR, a faithful reproduction of the appearance including novel viewpoints or head-poses is required. In contrast to state-of-the-art approaches that model the geometry and material properties explicitly, or are purely image-based, we introduce an implicit representation of the head based on scene representation networks. To handle the dynamics of the face, we combine our scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions. We use volumetric rendering to generate images from this hybrid representation and demonstrate that such a dynamic neural scene representation can be learned from monocular input data only, without the need of a specialized capture setup. In our experiments, we show that this learned volumetric representation allows for photo-realistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.) <|cite_end|>. Some of these models <|cite_start|> (Reference: Instant Volumetric Head Avatars: We present Instant Volumetric Head Avatars (INSTA), a novel approach for reconstructing photo-realistic digital avatars instantaneously. INSTA models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. Our pipeline is trained on a single monocular RGB portrait video that observes the subject under different expressions and views. While state-of-the-art methods take up to several days to train an avatar, our method can reconstruct a digital avatar in less than 10 minutes on modern GPU hardware, which is orders of magnitude faster than previous solutions. In addition, it allows for the interactive rendering of novel poses and expressions. By leveraging the geometry prior of the underlying parametric face model, we demonstrate that INSTA extrapolates to unseen poses. In quantitative and qualitative studies on various subjects, INSTA outperforms state-of-the-art methods regarding rendering quality and training time.) <|cite_end|> <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape.
To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|>prioritize speed and therefore rely on very fast volumetric NeRF variants (e.g. Instant NGP <|cite_start|> (Reference: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding: Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.) <|cite_end|>) to enable fast training and rendering. Despite impressive efforts to make NeRFs more accurate <|cite_start|> (Reference: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields: Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques. We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8% - 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360.) <|cite_end|>and faster <|cite_start|> (Reference: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding: Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.) <|cite_end|>, there remains a trade-off between these two aspects, and satisfying both simultaneously is difficult. Moreover, even fast and efficient NeRF models like InstantNGP typically achieve interactive inference frame rates at best (10-15 fps) <|cite_start|> (Reference: 3D Gaussian Splatting for Real-Time Radiance Field Rendering: Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution.
First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.) <|cite_end|>emerged as a competitive alternative to NeRF, achieving reasonable photo-realism while bringing the rendering speed to real-time rates. This is thanks to its representation as a set of 3D Gaussian primitives, which covers space more efficiently than point clouds, combined with efficient tile-based rasterization. However, for 3D head animation, 3DGS in its original form does not constitute an intuitive surface or point set that can be directly deformed by a 3DMM, unlike other well-known representations, \eg surface- or point-cloud-based ones. To circumvent this limitation, we propose \methodName, the first work that enhances 3D Gaussians with head animation capabilities (see Figure~\ref{fig:teaser}). At test time, our model receives a sequence of camera views and expression parameters and generates a corresponding video of the reconstructed avatar. Inspired by the expression blending idea of traditional 3DMMs <|cite_start|> (Reference: {A morphable model for the synthesis of 3D faces: In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an ''unlikely'' appearance Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.) <|cite_end|>, we introduce a base of latent features inside each Gaussian. This base is weighted by an expression vector, and the resulting sum is fed to a multi-layer perceptron (MLP) to yield the final color and opacity. Such a model allows the rendered colors to vary with the driving expression vector. Our model can work with any 3DMM representation, as it does not explicitly model deformations with respect to a particular mesh topology.
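For concreteness, this blending can be written compactly as follows, with notation introduced here purely for exposition: each Gaussian $i$ carries a latent feature base $\{\mathbf{z}_{i,k}\}_{k=1}^{m}$, which is weighted by the 3DMM expression vector $\boldsymbol{\psi} \in \mathbb{R}^{m}$,
\[
\mathbf{f}_i = \sum_{k=1}^{m} \psi_k \, \mathbf{z}_{i,k}, \qquad (\mathbf{c}_i, \alpha_i) = \Phi(\mathbf{f}_i),
\]
where $\Phi$ denotes the MLP that maps the blended feature $\mathbf{f}_i$ to the color $\mathbf{c}_i$ and opacity $\alpha_i$ of Gaussian $i$.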
In practice, our experiments show that \methodNameSpace can be controlled with expression parameters from two different 3DMMs, namely FLAME <|cite_start|> (Reference: Learning a model of facial shape and expression from 4d scans: The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose and expression dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33, 000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).) <|cite_end|>and FaceWarehouse <|cite_start|> (Reference: FaceWarehouse: a 3D facial expression database for visual computing: We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.) <|cite_end|>. Rendering runs at real-time frame rates of over 100fps (more than $200$fps for $512^2$ resolution).
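As a rough sketch of how such an expression-conditioned appearance head could be implemented, consider the following PyTorch snippet; the names, the dimensions ($m$ expression coefficients, feature dimension $d$), and the two-layer decoder are illustrative assumptions rather than our exact architecture:
\begin{verbatim}
import torch
import torch.nn as nn

class ExpressionAppearanceHead(nn.Module):
    # Illustrative sketch: each of the N Gaussians stores a latent base
    # of m feature vectors of dimension d; the base is weighted by the
    # m expression coefficients, and the summed feature is decoded by a
    # small MLP into per-Gaussian RGB color and opacity.
    def __init__(self, num_gaussians, m=50, d=32):
        super().__init__()
        self.bases = nn.Parameter(0.01 * torch.randn(num_gaussians, m, d))
        self.mlp = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                                 nn.Linear(64, 4))

    def forward(self, expression):
        # expression: (m,) 3DMM expression vector, e.g. FLAME coefficients.
        feats = torch.einsum("nmd,m->nd", self.bases, expression)  # (N, d)
        out = self.mlp(feats)
        color = torch.sigmoid(out[:, :3])    # (N, 3), RGB in [0, 1]
        opacity = torch.sigmoid(out[:, 3:])  # (N, 1)
        return color, opacity

# Usage: query per-Gaussian appearance for one (here neutral) expression.
head = ExpressionAppearanceHead(num_gaussians=100000)
color, opacity = head(torch.zeros(50))
\end{verbatim}
The remaining Gaussian attributes (means, rotations, scales) would then be rasterized as in standard 3DGS.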
We evaluate our model on publicly available monocular video datasets, also used in related works <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|> <|cite_start|> (Reference: Instant Volumetric Head Avatars: We present Instant Volumetric Head Avatars (INSTA), a novel approach for reconstructing photo-realistic digital avatars instantaneously. INSTA models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. Our pipeline is trained on a single monocular RGB portrait video that observes the subject under different expressions and views. While state-of-the-art methods take up to several days to train an avatar, our method can reconstruct a digital avatar in less than 10 minutes on modern GPU hardware, which is orders of magnitude faster than previous solutions. In addition, it allows for the interactive rendering of novel poses and expressions. By leveraging the geometry prior of the underlying parametric face model, we demonstrate that INSTA extrapolates to unseen poses. In quantitative and qualitative studies on various subjects, INSTA outperforms state-of-the-art methods regarding rendering quality and training time.) <|cite_end|> <|cite_start|> (Reference: PointAvatar: Deformable Point-based Head Avatars from Videos: The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide ranging applications in communication and entertainment. Current methods either build on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations. The former are limited by fixed topology, while the latter are non-trivial to deform and inefficient to render. Furthermore, existing approaches entangle lighting in the color estimation, thus they are limited in re-rendering the avatar in new environments. In contrast, we propose PointAvatar, a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading. We demonstrate that PointAvatar bridges the gap between existing mesh- and implicit representations, combining high-quality geometry and appearance with topological flexibility, ease of deformation and rendering efficiency.
We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources including hand-held smartphones, laptop webcams and internet videos, achieving state-of-the-art quality in challenging cases where previous methods fail, e.g., thin hair strands, while being significantly more efficient in training than competing methods.) <|cite_end|>. We demonstrate that the proposed model produces state-of-the-art results while increasing the rendering speed by a factor of at least $\times 10$ compared to interactive NeRF-based baselines <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|> <|cite_start|> (Reference: Instant Volumetric Head Avatars: We present Instant Volumetric Head Avatars (INSTA), a novel approach for reconstructing photo-realistic digital avatars instantaneously. INSTA models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. Our pipeline is trained on a single monocular RGB portrait video that observes the subject under different expressions and views. While state-of-the-art methods take up to several days to train an avatar, our method can reconstruct a digital avatar in less than 10 minutes on modern GPU hardware, which is orders of magnitude faster than previous solutions. In addition, it allows for the interactive rendering of novel poses and expressions. By leveraging the geometry prior of the underlying parametric face model, we demonstrate that INSTA extrapolates to unseen poses. In quantitative and qualitative studies on various subjects, INSTA outperforms state-of-the-art methods regarding rendering quality and training time.) <|cite_end|>. We show applications of \methodNameSpace to various tasks, such as novel same-person expression transfer, cross-subject expression transfer, and novel view synthesis. To summarize, our contributions include: \begin{enumerate} \item We present the first work that renders animatable heads in real time, adopting an efficient set of 3D Gaussian primitives as the underlying representation.
\item We extend the 3D Gaussian representation <|cite_start|> (Reference: 3D Gaussian Splatting for Real-Time Radiance Field Rendering: Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.) <|cite_end|>with a base of latent features, which can be weighted by an expression vector to enable head expression controllability. \item We extensively evaluate our proposed method and compare it against other recent state-of-the-art approaches, obtaining up to 2dB improvement and $\times 10$ speed-ups. \end{enumerate} \section{Related Work} \label{sec:related} Our work addresses the reconstruction and rendering of controllable 3D heads from a set of images. As our framework relies on a radiance field, we first give an overview of the relevant methods. Further, we discuss works related to animatable head reconstruction. \subsection{Radiance Fields} NeRF <|cite_start|> (Reference: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.)
<|cite_end|>represent the scene as an implicit neural radiance field that queries 3D space and predicts density and view-dependent color via a multi-layer perceptron (MLP). In the following years, many follow-up works have focused on improving different aspects of it, such as anti-aliasing <|cite_start|> (Reference: Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields: The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 17% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.) <|cite_end|> <|cite_start|> (Reference: Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields: Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.) <|cite_end|> <|cite_start|> (Reference: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields: Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques.
We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8% - 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360.) <|cite_end|>, regularization for sparse views <|cite_start|> (Reference: RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs: Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Though NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and annealing the ray sampling space during training. We additionally use a normalizing flow model to regularize the color of unobserved viewpoints. Our model outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that are extensively pre-trained on large multi-view datasets.) <|cite_end|> <|cite_start|> (Reference: Depth-supervised NeRF: Fewer Views and Faster Training for Free: A commonly observed failure mode of Neural Radiance Field (NeRF) is fitting incorrect geometries when given an insufficient number of input views. One potential reason is that standard volumetric rendering does not enforce the constraint that most of a scene's geometry consist of empty space and opaque surfaces. We formalize the above assumption through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance fields that takes advantage of readily-available depth supervision. We leverage the fact that current NeRF pipelines require images with known camera poses that are typically estimated by running structure-from-motion (SFM). Crucially, SFM also produces sparse 3D points that can be used as "free" depth supervision during training: we add a loss to encourage the distribution of a ray's terminating depth matches a given 3D keypoint, incorporating depth uncertainty. DS-NeRF can render better images given fewer training views while training 2-3x faster. Further, we show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal. And finally, we find that DS-NeRF can support other types of depth supervision such as scanned depth sensors and RGB-D reconstruction outputs.) <|cite_end|> <|cite_start|> (Reference: SPARF: Neural Radiance Fields from Sparse and Noisy Poses: Neural Radiance Field (NeRF) has recently emerged as a powerful representation to synthesize photorealistic novel views. While showing impressive performance, it relies on the availability of dense input views with highly accurate camera poses, thus limiting its application in real-world scenarios. In this work, we introduce Sparse Pose Adjusting Radiance Field (SPARF), to address the challenge of novel-view synthesis given only few wide-baseline input images (as low as 3) with noisy camera poses. Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses. 
By relying on pixel matches extracted between the input views, our multi-view correspondence objective enforces the optimized scene and camera poses to converge to a global and geometrically accurate solution. Our depth consistency loss further encourages the reconstructed scene to be consistent from any viewpoint. Our approach sets a new state of the art in the sparse-view regime on multiple challenging datasets.) <|cite_end|>and speed <|cite_start|> (Reference: Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction: We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images that capture the scene with known poses. This task, which is often applied to novel view synthesis, is recently revolutionized by Neural Radiance Field (NeRF) for its state-of-the-art quality and flexibility. However, NeRF and its variants require a lengthy training time ranging from hours to days for a single scene. In contrast, our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU. We adopt a representation consisting of a density voxel grid for scene geometry and a feature voxel grid with a shallow network for complex view-dependent appearance. Modeling with explicit and discretized volume representations is not new, but we propose two simple yet non-trivial techniques that contribute to fast convergence speed and high-quality output. First, we introduce the post-activation interpolation on voxel density, which is capable of producing sharp surfaces in lower grid resolution. Second, direct voxel density optimization is prone to suboptimal geometry solutions, so we robustify the optimization process by imposing several priors. Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train from scratch for a new scene.) <|cite_end|> <|cite_start|> (Reference: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding: Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.) <|cite_end|> <|cite_start|> (Reference: TensoRF: Tensorial Radiance Fields: We present TensoRF, a novel approach to model and reconstruct radiance fields. Unlike NeRF that purely uses MLPs, we model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features. Our central idea is to factorize the 4D scene tensor into multiple compact low-rank tensor components. 
We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF. To further boost performance, we introduce a novel vector-matrix (VM) decomposition that relaxes the low-rank constraints for two modes of a tensor and factorizes tensors into compact vector and matrix factors. Beyond superior rendering quality, our models with CP and VM decompositions lead to a significantly lower memory footprint in comparison to previous and concurrent works that directly optimize per-voxel features. Experimentally, we demonstrate that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with better rendering quality and even a smaller model size (<4 MB) compared to NeRF. Moreover, TensoRF with VM decomposition further boosts rendering quality and outperforms previous state-of-the-art methods, while reducing the reconstruction time (<10 min) and retaining a compact model size (<75 MB).) <|cite_end|>. DVGO <|cite_start|> (Reference: Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction: We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images that capture the scene with known poses. This task, which is often applied to novel view synthesis, is recently revolutionized by Neural Radiance Field (NeRF) for its state-of-the-art quality and flexibility. However, NeRF and its variants require a lengthy training time ranging from hours to days for a single scene. In contrast, our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU. We adopt a representation consisting of a density voxel grid for scene geometry and a feature voxel grid with a shallow network for complex view-dependent appearance. Modeling with explicit and discretized volume representations is not new, but we propose two simple yet non-trivial techniques that contribute to fast convergence speed and high-quality output. First, we introduce the post-activation interpolation on voxel density, which is capable of producing sharp surfaces in lower grid resolution. Second, direct voxel density optimization is prone to suboptimal geometry solutions, so we robustify the optimization process by imposing several priors. Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train from scratch for a new scene.) <|cite_end|>replace the MLP of NeRF with a density voxel grid and a learned feature voxel grid to considerably speed up convergence. TensoRF <|cite_start|> (Reference: TensoRF: Tensorial Radiance Fields: We present TensoRF, a novel approach to model and reconstruct radiance fields. Unlike NeRF that purely uses MLPs, we model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features. Our central idea is to factorize the 4D scene tensor into multiple compact low-rank tensor components. We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF. To further boost performance, we introduce a novel vector-matrix (VM) decomposition that relaxes the low-rank constraints for two modes of a tensor and factorizes tensors into compact vector and matrix factors.
Beyond superior rendering quality, our models with CP and VM decompositions lead to a significantly lower memory footprint in comparison to previous and concurrent works that directly optimize per-voxel features. Experimentally, we demonstrate that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with better rendering quality and even a smaller model size (<4 MB) compared to NeRF. Moreover, TensoRF with VM decomposition further boosts rendering quality and outperforms previous state-of-the-art methods, while reducing the reconstruction time (<10 min) and retaining a compact model size (<75 MB).) <|cite_end|>factorize the 4D feature voxel grid of a scene into a set of low-rank 2D and 3D tensors, which improves efficiency. InstantNGP <|cite_start|> (Reference: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding: Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.) <|cite_end|>employ a hash grid and an occupancy grid to accelerate computation, followed by a small MLP that infers density and color. NeRFs have also been used to represent dynamic scenes, including human bodies <|cite_start|> (Reference: HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video: We introduce a free-viewpoint rendering method -- HumanNeRF -- that works on a given monocular video of a human performing complex body motions, e.g. a video from YouTube. Our method enables pausing the video at any frame and rendering the subject from arbitrary new camera viewpoints or even a full 360-degree camera path for that particular frame and body pose. This task is particularly challenging, as it requires synthesizing photorealistic details of the body, as seen from various camera angles that may not exist in the input video, as well as synthesizing fine details such as cloth folds and facial appearance. Our method optimizes for a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps. The motion field is decomposed into skeletal rigid and non-rigid motions, produced by deep networks. We show significant performance improvements over prior work, and compelling examples of free-viewpoint renderings from monocular video of moving humans in challenging uncontrolled capture scenarios.)
<|cite_end|> <|cite_start|> (Reference: Animatable Neural Radiance Fields from Monocular RGB Videos: We present animatable neural radiance fields (animatable NeRF) for detailed human avatar creation from monocular videos. Our approach extends neural radiance fields (NeRF) to the dynamic scenes with human movements via introducing explicit pose-guided deformation while learning the scene representation network. In particular, we estimate the human pose for each frame and learn a constant canonical space for the detailed human template, which enables natural shape deformation from the observation space to the canonical space under the explicit control of the pose parameters. To compensate for inaccurate pose estimation, we introduce the pose refinement strategy that updates the initial pose during the learning process, which not only helps to learn more accurate human reconstruction but also accelerates the convergence. In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from novel views, and 3) animation of the human with novel poses.) <|cite_end|> <|cite_start|> (Reference: Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies: This paper addresses the challenge of reconstructing an animatable human model from a multi-view video. Some recent works have proposed to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. However, they represent the deformation field as translational vector field or SE(3) field, which makes the optimization highly under-constrained. Moreover, these representations cannot be explicitly controlled by input motions. Instead, we introduce neural blend weight fields to produce the deformation fields. Based on the skeleton-driven deformation, blend weight fields are used with 3D human skeletons to generate observation-to-canonical and canonical-to-observation correspondences. Since 3D human skeletons are more observable, they can regularize the learning of deformation fields. Moreover, the learned blend weight fields can be combined with input skeletal motions to generate new deformation fields to animate the human model. Experiments show that our approach significantly outperforms recent human synthesis methods. The code and supplementary materials are available at https://zju3dv.github.io/animatable_nerf/.) <|cite_end|>, human heads <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consist of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair, wearings, which can not be represented by traditional mesh blendshape. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. 
With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model with only ten to twenty minutes, and can render a photo-realistic human head image in tens of miliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided in our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|> <|cite_start|> (Reference: Instant Volumetric Head Avatars: We present Instant Volumetric Head Avatars (INSTA), a novel approach for reconstructing photo-realistic digital avatars instantaneously. INSTA models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. Our pipeline is trained on a single monocular RGB portrait video that observes the subject under different expressions and views. While state-of-the-art methods take up to several days to train an avatar, our method can reconstruct a digital avatar in less than 10 minutes on modern GPU hardware, which is orders of magnitude faster than previous solutions. In addition, it allows for the interactive rendering of novel poses and expressions. By leveraging the geometry prior of the underlying parametric face model, we demonstrate that INSTA extrapolates to unseen poses. In quantitative and qualitative studies on various subjects, INSTA outperforms state-of-the-art methods regarding rendering quality and training time.) <|cite_end|>, and generic time-varying scenes <|cite_start|> (Reference: D-NeRF: Neural Radiance Fields for Dynamic Scenes: Neural rendering techniques combining machine learning with geometric reasoning have arisen as one of the most promising approaches for synthesizing novel views of a scene from a sparse set of images. Among these, stands out the Neural radiance fields (NeRF), which trains a deep network to map 5D input coordinates (representing spatial location and viewing direction) into a volume density and view-dependent emitted radiance. However, despite achieving an unprecedented level of photorealism on the generated images, NeRF is only applicable to static scenes, where the same spatial location can be queried from different images. In this paper we introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain, allowing to reconstruct and render novel images of objects under rigid and non-rigid motions from a \emph{single} camera moving around the scene. For this purpose we consider time as an additional input to the system, and split the learning process in two main stages: one that encodes the scene into a canonical space and another that maps this canonical representation into the deformed scene at a particular time. Both mappings are simultaneously learned using fully-connected networks. Once the networks are trained, D-NeRF can render novel images, controlling both the camera view and the time variable, and thus, the object movement. We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions. Code, model weights and the dynamic scenes dataset will be released.) 
<|cite_end|> <|cite_start|> (Reference: HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields: Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this "hyper-space". Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between "moments", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks. Compared to Nerfies, HyperNeRF reduces average error rates by 4.1% for interpolation and 8.6% for novel-view synthesis, as measured by LPIPS. Additional videos, results, and visualizations are available at https://hypernerf.github.io.) <|cite_end|> <|cite_start|> (Reference: Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes: We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input. To do this, we introduce Neural Scene Flow Fields, a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion. Our representation is optimized through a neural network to fit the observed input views. We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion. We conduct a number of experiments that demonstrate our approach significantly outperforms recent monocular view synthesis methods, and show qualitative results of space-time view synthesis on a variety of real-world videos.) <|cite_end|> <|cite_start|> (Reference: Neural Radiance Flow for 4D View Synthesis and Video Processing: We present a method, Neural Radiance Flow (NeRFlow),to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when inputs images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.) 
<|cite_end|> <|cite_start|> (Reference: Nerfies: Deformable Neural Radiance Fields: We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones. Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF. We observe that these NeRF-like deformation fields are prone to local minima, and propose a coarse-to-fine optimization method for coordinate-based models that allows for more robust optimization. By adapting principles from geometry processing and physical simulation to NeRF-like models, we propose an elastic regularization of the deformation field that further improves robustness. We show that our method can turn casually captured selfie photos/videos into deformable NeRF models that allow for photorealistic renderings of the subject from arbitrary viewpoints, which we dub "nerfies." We evaluate our method by collecting time-synchronized data using a rig with two mobile phones, yielding train/validation images of the same pose at different viewpoints. We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.) <|cite_end|> <|cite_start|> (Reference: Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video: We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording), and creates a high-quality space-time geometry and appearance representation. We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g. a `bullet-time' video effect. NR-NeRF disentangles the dynamic scene into a canonical volume and its deformation. Scene deformation is implemented as ray bending, where straight rays are deformed non-rigidly. We also propose a novel rigidity network to better constrain rigid regions of the scene, leading to more stable results. The ray bending and rigidity network are trained without explicit supervision. Our formulation enables dense correspondence estimation across views and time, and compelling video editing applications such as motion exaggeration. Our code will be open sourced.) <|cite_end|>. Typically, these models rely on a canonical space, to which all observations are mapped for time-consistent reconstruction. Accordingly, the fastest head animation methods <|cite_start|> (Reference: Instant Volumetric Head Avatars: We present Instant Volumetric Head Avatars (INSTA), a novel approach for reconstructing photo-realistic digital avatars instantaneously. INSTA models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. Our pipeline is trained on a single monocular RGB portrait video that observes the subject under different expressions and views. While state-of-the-art methods take up to several days to train an avatar, our method can reconstruct a digital avatar in less than 10 minutes on modern GPU hardware, which is orders of magnitude faster than previous solutions. In addition, it allows for the interactive rendering of novel poses and expressions.
By leveraging the geometry prior of the underlying parametric face model, we demonstrate that INSTA extrapolates to unseen poses. In quantitative and qualitative studies on various subjects, INSTA outperforms state-of-the-art methods regarding rendering quality and training time.) <|cite_end|> <|cite_start|> (Reference: Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video: We present a novel semantic model for human head defined with neural radiance field. The 3D-consistent head model consists of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of neural radiance field, the constructed model can represent complex facial attributes including hair and wearings, which cannot be represented by traditional mesh blendshapes. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model in only ten to twenty minutes, and can render a photo-realistic human head image in tens of milliseconds with a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided on our project page: https://ustc3dv.github.io/NeRFBlendShape/) <|cite_end|> build on an InstantNGP hash grid to enable rendering at interactive frame rates (10-15 fps). 3DGS <|cite_start|> (Reference: 3D Gaussian Splatting for Real-Time Radiance Field Rendering: Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.) <|cite_end|> represents a scene as a set of explicit 3D Gaussians, with the motivation of minimizing computation in empty space. This efficient representation, combined with a tile-based rasterization algorithm, allows for accelerated training and real-time rendering (over 100 fps).
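To make the explicit-Gaussian representation concrete, the following minimal NumPy sketch shows the usual per-splat parameterisation: a mean position, a rotation quaternion, and per-axis scales, from which the anisotropic covariance is built as Sigma = R S S^T R^T so that it remains positive semi-definite during optimisation. This is a conceptual illustration written for this summary, not the released 3DGS implementation; the function names, the omission of opacity and colour, and the demo values are assumptions.

# Conceptual sketch of a single splat (not the released 3DGS code).
import numpy as np

def quat_to_rot(q):
    """3x3 rotation matrix from a (w, x, y, z) quaternion, normalised first."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def splat_density(p, mean, quat, scales):
    """Unnormalised density of one anisotropic 3D Gaussian at point p."""
    R = quat_to_rot(quat)
    S = np.diag(scales)
    sigma = R @ S @ S.T @ R.T  # positive semi-definite by construction
    d = np.asarray(p, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.solve(sigma, d)))

# Example: a splat elongated along x, rotated 45 degrees about the z axis.
q = [np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)]
print(splat_density([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], q, [0.5, 0.1, 0.1]))

Factorising the covariance through scales and a rotation, rather than optimising its entries directly, is what keeps every splat a valid Gaussian throughout training.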
A line of works extends the 3D Gaussian representation to model dynamic scenes <|cite_start|> (Reference: Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis: We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians which are optimized to reconstruct input images via differentiable rendering. To model dynamic scenes, we allow Gaussians to move and rotate over time while enforcing that they have persistent color, opacity, and size. By regularizing Gaussians' motion and rotation with local-rigidity constraints, we show that our Dynamic 3D Gaussians correctly model the same area of physical space over time, including the rotation of that space. Dense 6-DOF tracking and dynamic reconstruction emerges naturally from persistent dynamic view synthesis, without requiring any correspondence or flow as input. We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.) <|cite_end|> <|cite_start|> (Reference: 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering: Representing and rendering dynamic scenes has been an important but challenging task. Especially, to accurately model complex motions, high efficiency is usually hard to guarantee. To achieve real-time dynamic scene rendering while also enjoying high training and storage efficiency, we propose 4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes rather than applying 3D-GS for each individual frame. In 4D-GS, a novel explicit representation containing both 3D Gaussians and 4D neural voxels is proposed. A decomposed neural voxel encoding algorithm inspired by HexPlane is proposed to efficiently build Gaussian features from 4D neural voxels and then a lightweight MLP is applied to predict Gaussian deformations at novel timestamps. Our 4D-GS method achieves real-time rendering under high resolutions, 82 FPS at an 800$\times$800 resolution on an RTX 3090 GPU while maintaining comparable or better quality than previous state-of-the-art methods. More demos and code are available at https://guanjunwu.github.io/4dgs/.) <|cite_end|> <|cite_start|> (Reference: Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction: Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic neural rendering methods rely heavily on these implicit representations, which frequently struggle to capture the intricate details of objects in the scene. Furthermore, implicit methods have difficulty achieving real-time rendering in general dynamic scenes, limiting their use in a variety of tasks. To address the issues, we propose a deformable 3D Gaussians Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space with a deformation field to model monocular dynamic scenes. We also introduce an annealing smoothing training mechanism with no extra overhead, which can mitigate the impact of inaccurate poses on the smoothness of time interpolation tasks in real-world datasets. Through a differential Gaussian rasterizer, the deformable 3D Gaussians not only achieve higher rendering quality but also real-time rendering speed. 
Experiments show that our method outperforms existing methods significantly in terms of both rendering quality and speed, making it well-suited for tasks such as novel-view synthesis, time interpolation, and real-time rendering.) <|cite_end|>. Luiten et al.
[ "<|reference_start|> Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction: We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face. Digitally modeling and reconstructing a talking human is a key building-block for a variety of applications. Especially, for telepresence applications in AR or VR, a faithful reproduction of the appearance including novel viewpoints or head-poses is required. In contrast to state-of-the-art approaches that model the geometry and material properties explicitly, or are purely image-based, we introduce an implicit representation of the head based on scene representation networks. To handle the dynamics of the face, we combine our scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions. We use volumetric rendering to generate images from this hybrid representation and demonstrate that such a dynamic neural scene representation can be learned from monocular input data only, without the need of a specialized capture setup. In our experiments, we show that this learned volumetric representation allows for photo-realistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods. <|reference_end|>", "<|reference_start|> 3D Gaussian Splatting for Real-Time Radiance Field Rendering: Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets. <|reference_end|>", "<|reference_start|> {A morphable model for the synthesis of 3D faces: In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. 
Second, the approach regulates the naturalness of modeled faces avoiding faces with an ``unlikely'' appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness. <|reference_end|>", "<|reference_start|> Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video: We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording), and creates a high-quality space-time geometry and appearance representation. We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g. a `bullet-time' video effect. NR-NeRF disentangles the dynamic scene into a canonical volume and its deformation. Scene deformation is implemented as ray bending, where straight rays are deformed non-rigidly. We also propose a novel rigidity network to better constrain rigid regions of the scene, leading to more stable results. The ray bending and rigidity network are trained without explicit supervision. Our formulation enables dense correspondence estimation across views and time, and compelling video editing applications such as motion exaggeration. Our code will be open sourced. <|reference_end|>" ]
[ 4, 16, 18, 50 ]
{"<|multi_cite_1_1|>": "ss-1225817", "<|multi_cite_1_2|>": "ss-678868", "<|multi_cite_1_3|>": "ss-995116", "<|multi_cite_2_1|>": "arxiv-384947", "<|multi_cite_2_2|>": "arxiv-308034", "<|multi_cite_3_1|>": "arxiv-384947", "<|multi_cite_3_2|>": "arxiv-469944", "<|cite_4|>": "arxiv-453141", "<|cite_5|>": "arxiv-453141", "<|cite_6|>": "arxiv-254624", "<|cite_7|>": "arxiv-308034", "<|multi_cite_8_1|>": "arxiv-464252", "<|multi_cite_8_2|>": "arxiv-453141", "<|cite_9|>": "arxiv-392871", "<|cite_10|>": "arxiv-496884", "<|cite_11|>": "arxiv-392871", "<|cite_12|>": "arxiv-529521", "<|cite_13|>": "arxiv-529521", "<|cite_14|>": "ss-1225817", "<|cite_15|>": "ss-678868", "<|cite_16|>": "ss-995116", "<|multi_cite_17_1|>": "arxiv-453141", "<|multi_cite_17_2|>": "arxiv-464252", "<|multi_cite_17_3|>": "arxiv-469944", "<|multi_cite_18_1|>": "arxiv-453141", "<|multi_cite_18_2|>": "arxiv-464252", "<|cite_19|>": "arxiv-529521", "<|cite_20|>": "arxiv-254624", "<|multi_cite_21_1|>": "arxiv-329660", "<|multi_cite_21_2|>": "arxiv-382860", "<|multi_cite_21_3|>": "arxiv-496884", "<|multi_cite_22_1|>": "arxiv-384578", "<|multi_cite_22_2|>": "arxiv-353354", "<|multi_cite_22_3|>": "arxiv-463897", "<|multi_cite_23_1|>": "arxiv-382524", "<|multi_cite_23_2|>": "arxiv-392871", "<|multi_cite_23_3|>": "arxiv-406448", "<|cite_24|>": "arxiv-382524", "<|cite_25|>": "arxiv-406448", "<|cite_26|>": "arxiv-392871", "<|multi_cite_27_1|>": "arxiv-392137", "<|multi_cite_27_2|>": "arxiv-350997", "<|multi_cite_27_3|>": "arxiv-339318", "<|multi_cite_28_1|>": "arxiv-453141", "<|multi_cite_28_2|>": "arxiv-464252", "<|multi_cite_29_1|>": "arxiv-306259", "<|multi_cite_29_2|>": "arxiv-350817", "<|multi_cite_29_3|>": "arxiv-305931", "<|multi_cite_29_4|>": "arxiv-310756", "<|multi_cite_29_5|>": "arxiv-305883", "<|multi_cite_29_6|>": "arxiv-311638", "<|multi_cite_30_1|>": "arxiv-464252", "<|multi_cite_30_2|>": "arxiv-453141", "<|cite_31|>": "arxiv-529521", "<|multi_cite_32_1|>": "arxiv-532214", "<|multi_cite_32_2|>": "arxiv-548509", "<|multi_cite_32_3|>": "arxiv-541943", "<|cite_33|>": "arxiv-532214", "<|cite_34|>": "arxiv-541943", "<|cite_35|>": "arxiv-548509", "<|multi_cite_36_1|>": "arxiv-418680", "<|multi_cite_36_2|>": "arxiv-386677", "<|multi_cite_36_3|>": "ss-2246378", "<|multi_cite_37_1|>": "arxiv-502236", "<|multi_cite_37_2|>": "ss-2246525", "<|multi_cite_38_1|>": "ss-2246378", "<|multi_cite_38_2|>": "ss-2246525", "<|multi_cite_39_1|>": "arxiv-386677", "<|multi_cite_39_2|>": "arxiv-502236", "<|multi_cite_40_1|>": "arxiv-160521", "<|multi_cite_40_2|>": "arxiv-91814", "<|multi_cite_40_3|>": "arxiv-384947", "<|multi_cite_40_4|>": "arxiv-469944", "<|cite_41|>": "arxiv-384947", "<|cite_42|>": "arxiv-469944", "<|cite_43|>": "arxiv-308034", "<|cite_45|>": "arxiv-187680", "<|cite_46|>": "arxiv-392871", "<|cite_47|>": "arxiv-464252", "<|cite_48|>": "arxiv-392871", "<|cite_49|>": "arxiv-453141", "<|cite_50|>": "ss-995116", "<|cite_51|>": "arxiv-392871", "<|cite_52|>": "arxiv-464565"}
2210.04665
<|paper_start|> Title: Towards Developing and Analysing Metric-Based Software Defect Severity Prediction Model Abstract: Towards Developing and Analysing Metric-Based Software Defect Severity Prediction Model: In a critical software system, the testers have to spend an enormous amount of time and effort to maintain the software due to the continuous occurrence of defects. Among such defects, some severe defects may adversely affect the software. To reduce the time and effort of a tester, many machine learning models have been proposed in the literature, which use the documented defect reports to automatically predict the severity of the defective software modules. In contrast to the traditional approaches, in this work we propose a metric-based software defect severity prediction (SDSP) model that uses a self-training semi-supervised learning approach to classify the severity of the defective software modules. The approach is trained on a mixture of unlabelled and labelled defect severity data. The self-training works on the basis of a decision tree classifier to assign the pseudo-class labels to the unlabelled instances. The predictions are promising since the self-training successfully assigns the suitable class labels to the unlabelled instances. While numerous research studies have covered prediction approaches as well as the methodological aspects of defect severity prediction models, the gap in estimating project attributes from the prediction model remains unresolved. To bridge the gap, we propose five project-specific measures, namely the Risk-Factor (RF), the Percent of Saved Budget (PSB), the Loss in the Saved Budget (LSB), the Remaining Service Time (RST) and the Gratuitous Service Time (GST), to capture project outcomes from the predictions. Similar to the traditional measures, these measures are also calculated from the observed confusion matrix. These measures are used to analyse the impact that the prediction model has on the software project. Introduction \label{Introduction} Building highly reliable software is always a challenging task for the software quality assurance team, and it costs considerable time and manpower <|cite_start|> (Reference: Software Engineering: A Practitioner's Approach: ) <|cite_end|> <|cite_start|> (Reference: Handbook of Software Reliability Engineering: Technical foundations introduction software reliability and system reliability the operational profile software reliability modelling survey model evaluation and recalibration techniques practices and experiences best current practice of SRE software reliability measurement experience measurement-based analysis of software reliability software fault and failure classification techniques trend analysis in validation and maintenance software reliability and field data analysis software reliability process assessment emerging techniques software reliability prediction metrics software reliability and testing fault-tolerant SRE software reliability using fault trees software reliability process simulation neural networks and software reliability. Appendices: software reliability tools software failure data set repository.) <|cite_end|>. In this regard, many organisations are spending an enormous amount of money on their test teams to remove/modify the defective code content before releasing the product.
However, many software systems are facing maintenance issues due to the improper development of software modules <|cite_start|> (Reference: Software Engineering: A Practitioner's Approach: ) <|cite_end|>. Of these, some issues require quick assessment, while others still require mandatory assessment but with lower priority. To this end, instead of identifying the severity (priority) of the defective modules manually, automation tools such as software defect severity prediction (SDSP) models have been developed in recent years <|cite_start|> (Reference: Bug severity prediction using question-and-answer pairs from Stack Overflow: ) <|cite_end|> <|cite_start|> (Reference: The effect of Bellwether analysis on software vulnerability severity prediction models: ) <|cite_end|> <|cite_start|> (Reference: Severity assessment of software defect reports using text classification: Defect severity assessment is essential in order to allocate testing resources and effectively plan testing activities. In this paper, we use text classification techniques to predict and assess the severity of defects. The results are based on defect description of issue requirements obtained from NASA project. We have used Support Vector Machine technique to predict defect severity from issue reports.) <|cite_end|>. The predictive models for defect severity classification mainly utilise the text records to classify the software modules into respective severity classes <|cite_start|> (Reference: Automated severity assessment of software defect reports: In mission critical systems, such as those developed by NASA, it is very important that the test engineers properly recognize the severity of each issue they identify during testing. Proper severity assessment is essential for appropriate resource allocation and planning for fixing activities and additional testing. Severity assessment is strongly influenced by the experience of the test engineers and by the time they spend on each issue. The paper presents a new and automated method named SEVERIS (severity issue assessment), which assists the test engineer in assigning severity levels to defect reports. SEVERIS is based on standard text mining and machine learning techniques applied to existing sets of defect reports. A case study on using SEVERIS with data from NASA's Project and Issue Tracking System (PITS) is presented in the paper. The case study results indicate that SEVERIS is a good predictor for issue severity levels, while it is easy to use and efficient.) <|cite_end|> <|cite_start|> (Reference: Predicting the severity of a reported bug: The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherent manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65–0.75 with Mozilla and Eclipse; 0.70–0.85 in the case of GNOME).
<|cite_end|> <|cite_start|> (Reference: Towards more accurate severity prediction and fixer recommendation of software bugs: ) <|cite_end|> <|cite_start|> (Reference: Predicting Software Defect Severity Level using Sentence Embedding and Ensemble Learning: Bug tracking is one of the prominent activities during the maintenance phase of software development. The severity of the bug acts as a key indicator of its criticality and impact towards planning evolution and maintenance of various types of software products. This indicator measures how negatively the bug may affect the system functionality. This helps in determining how quickly the development teams need to address the bug for successful execution of the software system. Due to a large number of bugs reported every day, the developers find it really difficult to assign the severity level to bugs accurately. Assigning incorrect severity level results in delaying the bug resolution process. Thus automated systems were developed which will assign a severity level using various machine learning techniques. In this work, five different types of sentence embedding techniques have been applied on bugs description to convert the description comments to an n-dimensional vector. These computed vectors are used as an input of the software defect severity level prediction models and ensemble techniques like Bagging, Random Forest classifier, Extra Trees classifier, AdaBoost and Gradient Boosting have been used to train these models. We have also considered different variants of the Synthetic Minority Oversampling Technique (SMOTE) to handle the class imbalance problem as the considered datasets are not evenly distributed. The experimental results on six projects highlight that the usage of sentence embedding, ensemble techniques, and different variants of SMOTE techniques helps in improving the predictive ability of defect severity level prediction models.) <|cite_end|> <|cite_start|> (Reference: An empirical study on improving severity prediction of defect reports using feature selection: In software maintenance, severity prediction on defect reports is an emerging issue obtaining research attention due to the considerable triaging cost. In the past research work, several text mining approaches have been proposed to predict the severity using advanced learning models. Although these approaches demonstrate the effectiveness of predicting the severity, they do not discuss the problem of how to find the indicators in good quality. In this paper, we discuss whether feature selection can benefit the severity prediction task with three commonly used feature selection schemes, Information Gain, Chi-Square, and Correlation Coefficient, based on the Multinomial Naive Bayes classification approach. We have conducted empirical experiments with four open-source components from Eclipse and Mozilla. The experimental results show that these three feature selection schemes can further improve the prediction performance in over half the cases.) <|cite_end|>. These models utilise text mining approaches to first extract features from the documented text and then classify the severity of the defective software modules. However, the literature exhibits little progress towards providing solutions using multi-class classification approaches without mining the documented records of the software projects <|cite_start|> (Reference: Bug report severity level prediction in open source software: A survey and research opportunities: ) <|cite_end|>.
As an alternative to proposing traditional text mining approaches, or solutions for the methodological aspects of finding the severity of a defective software module, in this work we propose a classification solution using a self-training semi-supervised learning approach. The primary objective of this work is to classify the software modules into five different classes, namely \textit{High Severity, Critical, Major, Non-trivial}, and \textit{Clean}, from a mixture of labelled and unlabelled data. In this approach, first the available labelled data is over-sampled using a well-known technique called adaptive synthetic sampling (ADASYN) <|cite_start|> (Reference: ADASYN: Adaptive synthetic sampling approach for imbalanced learning: This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics.) <|cite_end|>, to enhance the minority classes. After obtaining the balanced training data, the self-training semi-supervised learning model <|cite_start|> (Reference: Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-training: ) <|cite_end|> <|cite_start|> (Reference: Confidence Regularized Self-Training: Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address the problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels while CRST-MR encourages the smoothness on network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.) <|cite_end|> <|cite_start|> (Reference: Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach: Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data.
This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. Underpinned by contrastive regularization and confidence-based reweighting, this contrastive self-training framework can gradually improve model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins on 7 benchmarks in 6 tasks, and achieves competitive performance with fully-supervised fine-tuning methods.) <|cite_end|> <|cite_start|> (Reference: Statistical and Algorithmic Insights for Semi-supervised Learning with Self-training: Self-training is a classical approach in semi-supervised learning which is successfully applied to a variety of machine learning problems. Self-training algorithm generates pseudo-labels for the unlabeled examples and progressively refines these pseudo-labels which hopefully coincides with the actual labels. This work provides theoretical insights into self-training algorithm with a focus on linear classifiers. We first investigate Gaussian mixture models and provide a sharp non-asymptotic finite-sample characterization of the self-training iterations. Our analysis reveals the provable benefits of rejecting samples with low confidence and demonstrates that self-training iterations gracefully improve the model accuracy even if they do get stuck in sub-optimal fixed points. We then demonstrate that regularization and class margin (i.e. separation) is provably important for the success and lack of regularization may prevent self-training from identifying the core features in the data. Finally, we discuss statistical aspects of empirical risk minimization with self-training for general distributions. We show how a purely unsupervised notion of generalization based on self-training based clustering can be formalized based on cluster margin. We then establish a connection between self-training based semi-supervision and the more general problem of learning with heterogeneous data and weak supervision.) <|cite_end|> is implemented on both labelled and unlabelled data. The self-training is an iterative model that uses a decision tree as the base learner to assign pseudo-labels to the unlabelled instances; at each iteration, the instances whose predictions exceed a pre-defined acceptance threshold are added to the original labelled set. In the end, the generated pseudo-labelled training data is fed to the decision tree classifier to observe the performance on the test dataset (a minimal illustrative sketch of this pipeline is given below). While most of the literature describes approaches to the defect severity prediction problem, the gap of estimating project-specific attributes from the prediction model is still present in the literature. To bridge this gap, and to understand how helpful the prediction results are to project managers, in this work we propose five project-specific measures, namely \textit{the Risk-Factor (RF)}, \textit{the Percent of Saved Budget (PSB), the Loss in the Saved Budget (LSB)}, \textit{the Remaining Service Time (RST)}, and \textit{the Gratuitous Service Time (GST)}. Similar to the traditional measures, these measures are also calculated from the observed confusion matrix of the prediction model.
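To make the pipeline just described concrete, here is a minimal sketch in Python. It assumes the scikit-learn and imbalanced-learn libraries; the 0.75 acceptance threshold, the iteration cap, and the random seeds are illustrative assumptions rather than the paper's settings.

# Illustrative sketch of the described pipeline (assumed parameter values).
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.tree import DecisionTreeClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.75, max_iter=20):
    """ADASYN-balanced, decision-tree-based self-training (illustrative)."""
    # Over-sample the minority severity classes in the labelled set; note
    # that ADASYN needs a few real samples per minority class to interpolate.
    X_lab, y_lab = ADASYN(random_state=0).fit_resample(X_lab, y_lab)
    for _ in range(max_iter):
        clf = DecisionTreeClassifier(random_state=0).fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold  # acceptance threshold
        if not confident.any():
            break  # nothing clears the threshold; stop early
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    # Final classifier trained on the original plus pseudo-labelled data.
    return DecisionTreeClassifier(random_state=0).fit(X_lab, y_lab)

Both the traditional measures and the five proposed project-specific measures can then be derived from the confusion matrix of the returned classifier on the held-out test set (e.g., via sklearn.metrics.confusion_matrix). scikit-learn's sklearn.semi_supervised.SelfTrainingClassifier offers the same threshold-based loop as a ready-made wrapper.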
\textit{The RF} is calculated as the amount of risk in the project as a result of the false negatives. \textit{The PSB} and \textit{the LSB} are indicative of the savings and the loss of the original savings in the project, respectively. \textit{The RST} and \textit{the GST} measure the amount of time still required to service the damaged code and the unnecessary time spent on the project, respectively. To the best of our knowledge, providing interpretable performance in terms of project attributes is novel in the field of software defect severity prediction. For this empirical study, we have evaluated the proposed approach on the four software systems collected from the publicly available AEEEM <|cite_start|> (Reference: Evaluating defect prediction approaches: a benchmark and an extensive comparison: ) <|cite_end|> repository. The experimental evaluations are conducted before and after implementing the self-training model (using the decision tree classifier) to observe the difference in the predictive performances. The comparative analysis is made using both the traditional (such as \textit{Accuracy} and \textit{F-measure}) and the proposed measures. The experimental results show that the proposed self-training model successfully assigns the class labels to the unlabelled instances. On average, the proposed self-training model shows a reduction in the risk of failure of the system and a reduction in the remaining service time. As a consequence, the software system accounts for increased budget savings. This work makes the following novel contributions in the field of software defect severity prediction: \begin{enumerate} \item As an alternative to proposing traditional text-mining approaches for severity prediction, we provide a metric-based solution. From the mixture of labelled and unlabelled data, the self-training semi-supervised classification approach classifies the software modules into five different classes, namely \textit{high severity, critical, major, non-trivial}, and \textit{clean}. \item To understand how helpful the prediction results are to project managers, in this work we propose five project-specific measures, namely \textit{the risk-factor, the percent of saved budget, the loss in the saved budget, the remaining service time}, and \textit{the gratuitous service time}. To the best of our knowledge, proposing such project-specific measures is new to the area of software defect severity prediction. \end{enumerate} \textit{\textbf{Paper Organization}}: Section \ref{RelatedWork} presents various text mining approaches for the defect severity prediction task. The detailed architecture of the proposed decision tree based self-training semi-supervised learning model is presented in Section \ref{Self-training}. Section \ref{empiSetup} provides details of the utilised datasets and the traditional and proposed evaluation measures. The empirical results from the proposed model are discussed in Section \ref{results}. Section \ref{threats} provides threats to the validity of the proposed framework. Finally, Section \ref{conclusion} concludes the work and provides potential research directions for this work. <|paper_end|>
[ "<|reference_start|> Predicting the severity of a reported bug: The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherent manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65–0.75 with Mozilla and Eclipse; 0.70–0.85 in the case of GNOME). <|reference_end|>", "<|reference_start|> Predicting Software Defect Severity Level using Sentence Embedding and Ensemble Learning: Bug tracking is one of the prominent activities during the maintenance phase of software development. The severity of the bug acts as a key indicator of its criticality and impact towards planning evolution and maintenance of various types of software products. This indicator measures how negatively the bug may affect the system functionality. This helps in determining how quickly the development teams need to address the bug for successful execution of the software system. Due to a large number of bugs reported every day, the developers find it really difficult to assign the severity level to bugs accurately. Assigning incorrect severity level results in delaying the bug resolution process. Thus automated systems were developed which will assign a severity level using various machine learning techniques. In this work, five different types of sentence embedding techniques have been applied on bugs description to convert the description comments to an n-dimensional vector. These computed vectors are used as an input of the software defect severity level prediction models and ensemble techniques like Bagging, Random Forest classifier, Extra Trees classifier, AdaBoost and Gradient Boosting have been used to train these models. We have also considered different variants of the Synthetic Minority Oversampling Technique (SMOTE) to handle the class imbalance problem as the considered datasets are not evenly distributed. The experimental results on six projects highlight that the usage of sentence embedding, ensemble techniques, and different variants of SMOTE techniques helps in improving the predictive ability of defect severity level prediction models. <|reference_end|>", "<|reference_start|> Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach: Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data. This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. 
Underpinned by contrastive regularization and confidence-based reweighting, this contrastive self-training framework can gradually improve model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins on 7 benchmarks in 6 tasks, and achieves competitive performance with fully-supervised fine-tuning methods. <|reference_end|>", "<|reference_start|> Evaluating defect prediction approaches: a benchmark and an extensive comparison: <|reference_end|>" ]
[ 7, 9, 15, 17 ]
{"<|multi_cite_1_1|>": "ss-1291812", "<|multi_cite_1_2|>": "ss-1773967", "<|cite_2|>": "ss-1291812", "<|multi_cite_3_1|>": "ss-1516694", "<|multi_cite_3_2|>": "ss-2348918", "<|multi_cite_3_3|>": "ss-2348919", "<|multi_cite_4_1|>": "ss-1274710", "<|multi_cite_4_2|>": "ss-1274709", "<|multi_cite_4_3|>": "ss-2348920", "<|multi_cite_4_4|>": "ss-2348921", "<|multi_cite_4_5|>": "ss-2348922", "<|cite_5|>": "ss-2348923", "<|cite_6|>": "ss-682394", "<|multi_cite_7_1|>": "ss-906620", "<|multi_cite_7_2|>": "arxiv-220451", "<|multi_cite_7_3|>": "arxiv-296549", "<|multi_cite_7_4|>": "arxiv-273070", "<|cite_8|>": "ss-725922"}
1811.05251
<|paper_start|> Title: SVM-Based Sea-Surface Small Target Detection: A False-Alarm-Rate-Controllable Approach Abstract: SVM-Based Sea-Surface Small Target Detection: A False-Alarm-Rate-Controllable Approach: In this letter, we consider the varying detection environments to address the problem of detecting small targets within sea clutter. We first extract three simple yet practically discriminative features from the returned signals in the time and frequency domains and then fuse them into a 3-D feature space. Based on the constructed space, we then adopt and elegantly modify the support vector machine (SVM) to design a learning-based detector that enfolds the false alarm rate (FAR). Most importantly, our proposed detector can flexibly control the FAR by simply adjusting two introduced parameters, which makes it easy to regulate the detector's sensitivity to the outliers incurred by the sea spikes and to fairly evaluate the performance of different detection algorithms. Experimental results demonstrate that our proposed detector significantly improves the detection probability over several existing classical detectors in both low signal to clutter ratio (SCR) (up to 58%) and low FAR (up to 40%) cases. Introduction Accurate detection of small targets on the sea surface is an important problem in remote sensing and radar signal processing applications <|cite_start|> (Reference: Marine Wireless Big Data: Efficient Transmission, Related Applications, and Challenges: The vast volume of marine wireless sampling data and its continuously explosive growth herald the coming of the era of marine wireless big data. Two challenges imposed by these data are how to fast, reliably, and sustainably deliver them in extremely hostile marine environments and how to apply them after collection. In this article, we first propose an architecture of heterogeneous marine networks that flexibly exploits the existing underwater wireless techniques as a potential solution for fast data transmission. We then investigate the possibilities of and develop the schemes for energy-efficient and reliable undersea transmission without or slightly with data rate reduction. After discussing the data transmission, we summarize the possible applications of the collected big data and particularly focus on the problems of applying these data in sea-surface object detection and marine object recognition. Open issues and challenges that need to be further explored regarding transmission and detection/recognition are also discussed in the article.) <|cite_end|>. However, during detection, the radar returns from the small targets are severely obscured by the backscatter from the sea surface, which is referred to as sea clutter <|cite_start|> (Reference: Marine Wireless Big Data: Efficient Transmission, Related Applications, and Challenges: The vast volume of marine wireless sampling data and its continuously explosive growth herald the coming of the era of marine wireless big data. Two challenges imposed by these data are how to fast, reliably, and sustainably deliver them in extremely hostile marine environments and how to apply them after collection. We first propose an architecture of heterogeneous marine networks that flexibly exploits the existing underwater wireless techniques as a potential solution for fast data transmission. We then investigate the possibilities of and develop the schemes for energy-efficient and reliable undersea transmission without or slightly with data rate reduction.
After discussing the data transmission, we summarize the possible applications of the collected big data and particularly focus on the problems of applying these data in sea-surface object detection and marine object recognition. Open issues and challenges that need to be further explored regarding transmission and detection/recognition are also discussed in the article.) <|cite_end|>. To identify the small targets from the sea clutter, a promising approach is to seek certain features from the returned signals that can depict the intrinsic differences between these two classes and then design a feature-based detector. However, the extracted features usually become ineffective when the detection environment changes, as the characteristics of the sea clutter are highly dependent on the sea states and the radar's parameter configurations. Therefore, extracting robust features from the returned radar signals that adapt to varying environments is crucial for target detection. There has been extensive work on designing potentially discriminative features for detecting small targets within sea clutter. In <|cite_start|> (Reference: Small target detection in sea clutter based on doppler spectrum features: Small target detection in sea clutter is a challenge problem in radar signal processing community. Based on the analysis of sea clutter Doppler spectrum characteristics, two target detection algorithms are proposed, namely, a Bayesian detection algorithm based on joint Rayleigh distribution model and a feature detection algorithm based on the entropy feature extracted from signal's Doppler spectrum. The detection performances of the proposed algorithms are evaluated based on the data collected by the McMaster IPIX radar at the east coast of Canada) <|cite_end|>, the authors utilized a Doppler spectrum feature to describe the differences between the sea clutter and target signals, where the detector's decision was made by simply comparing the feature's value with a predefined threshold. However, such a single-feature-based detector only exploits limited information from the returned signals, and thus its detection performance is likely to be affected by the varying detection environments. Considering this, a potential solution for improving detection performance is to integrate more features to construct multi-dimensional feature spaces, as this provides additional information about the returned signals. Following this insight, Xu in <|cite_start|> (Reference: Low observable targets detection by joint fractal properties of sea clutter: An experimental study of IPIX OHGR datasets: We exploit the joint fractal properties of sea clutter extracted from detrended fluctuation analysis (DFA) for targets detection. We find that two specific fractal statistics, i.e., the intercept at the crucial scale and the Hurst exponent of optimal scales provide valuable information for targets detection. The first statistic measures the discrepancy between sea clutter and low observable targets at the crucial fractal scale, and the second one evaluates the average fractal difference within the optimal multi-scales. A target detection method integrating these two statistics is proposed, which is validated by real-life IPIX radar datasets. We find that this joint fractal detection approach achieves more accurate results for low observable targets detection.) <|cite_end|> extracted two temporal fractal features to devise a 2-D convex hull learning algorithm for detection. Further, Shui \textit{et al.} in
<|cite_start|> (Reference: Tri-feature-based detection of floating small targets in sea clutter: It is always a challenging problem for marine surface surveillance radar to detect sea-surface floating small targets. Conventional detectors using incoherent integration and adaptive clutter suppression have low detection probabilities for such targets with weak returns and unobservable Doppler shifts. In this paper, three features of a received vector at a resolution cell-the relative amplitude, relative Doppler peak height, and relative entropy of the Doppler amplitude spectrum-are exploited to give returns with targets from sea clutter. Real datasets show that each feature alone has some discriminability, and the three features jointly exhibit strong discriminability. Due to diversity of targets in practice, it is impossible to get features of returns with all kinds of targets. We recast detection of sea-surface floating small targets as a one-class anomaly detection problem in the 3D feature space. A fast convexhull learning algorithm is proposed to learn the decision region of the clutter pattern from feature vectors of clutter-only observations. As a result, a tri-feature-based detector is developed. The experiment results for the IPIX datasets show that the proposed detector at an observation time of several seconds attains better detection performance than several existing detectors.) <|cite_end|> introduced three features, i.e., the RAA, RPH, and RVE, to construct a 3-D feature space, under which the detection accuracy is improved in both high and low signal to clutter ratio (SCR) scenarios compared with several single-feature-based detectors. Nevertheless, it should be noted that the detection performance in <|cite_start|> (Reference: Low observable targets detection by joint fractal properties of sea clutter: An experimental study of IPIX OHGR datasets: We exploit the joint fractal properties of sea clutter extracted from detrended fluctuation analysis (DFA) for targets detection. We find that two specific fractal statistics, i.e., the intercept at the crucial scale and the Hurst exponent of optimal scales provide valuable information for targets detection. The first statistic measures the discrepancy between sea clutter and low observable targets at the crucial fractal scale, and the second one evaluates the average fractal difference within the optimal multi-scales. A target detection method integrating these two statistics is proposed, which is validated by real-life IPIX radar datasets. We find that this joint fractal detection approach achieves more accurate results for low observable targets detection.) <|cite_end|> and <|cite_start|> (Reference: Tri-feature-based detection of floating small targets in sea clutter: It is always a challenging problem for marine surface surveillance radar to detect sea-surface floating small targets. Conventional detectors using incoherent integration and adaptive clutter suppression have low detection probabilities for such targets with weak returns and unobservable Doppler shifts. In this paper, three features of a received vector at a resolution cell-the relative amplitude, relative Doppler peak height, and relative entropy of the Doppler amplitude spectrum-are exploited to give returns with targets from sea clutter. Real datasets show that each feature alone has some discriminability, and the three features jointly exhibit strong discriminability.
Due to diversity of targets in practice, it is impossible to get features of returns with all kinds of targets. We recast detection of sea-surface floating small targets as a one-class anomaly detection problem in the 3D feature space. A fast convexhull learning algorithm is proposed to learn the decision region of the clutter pattern from feature vectors of clutter-only observations. As a result, a tri-feature-based detector is developed. The experiment results for the IPIX datasets show that the proposed detector at an observation time of several seconds attains better detection performance than several existing detectors.) <|cite_end|> is still poor in low SCR scenarios, e.g., lower than $57\%$ when SCR $= -2$ dB. To further promote the robustness of the detectors, the following two ideas could be considered. Firstly, seek more discriminative features. It was observed that some features, such as the widely adopted amplitude, become ineffective in low SCR scenarios <|cite_start|> (Reference: Robust CFAR detector with weighted amplitude iteration in nonhomogeneous sea clutter: Constant false alarm rate (CFAR) is a desired property for target detection in unknown and nonstationary sea clutter. Analysis of the experimental data shows that gamma distribution is a promising model for sea clutter. A robust CFAR method is proposed for target detection in nonhomogeneous gamma-distributed clutter, using the weighted amplitude iteration of the samples in the reference window as the adaptive threshold. By combining the advantages of cell-averaging CFAR (CA-CFAR), greatest of selection CFAR (GO-CFAR), and ordered statistic CFAR (OS-CFAR), the proposed method shows a similar detection performance as the CA-CFAR in homogenous gamma-distributed environment with a known shape parameter. In a nonhomogeneous environment, the proposed method also works robustly with the appropriate weighting factors, whereas CA-, GO-, and OS-CFAR methods exhibit a serious degradation of the detection probability and an excessive increase in the false alarm rate. The detection performance of the proposed method in gamma-distributed clutter with different shape parameters is also presented by simulation. The superiority of the proposed method, which is applicable to different clutter scenarios with corresponding weighting factors, is investigated and verified by simulations and experimental data.) <|cite_end|>. On the contrary, we find that some concepts from other research fields can be used to define features that are effective even in low SCR situations, e.g., the information entropy from communication theory. Secondly, establish more advanced detection frameworks. Several recent works have shown that machine learning based techniques exhibit excellent potential in target detection compared with some conventional approaches <|cite_start|> (Reference: Artificial intelligence techniques for clutter identification with polarimetric radar signatures: ) <|cite_end|> <|cite_start|> (Reference: Target detection in sea clutter based on multifractal characteristics after empirical mode decomposition: Characteristic analysis of sea clutter is important in utilizing radar observations and detecting sea-surface targets. Real data signals are analyzed to determine the multifractal characteristics of sea clutter signals. Sea clutter is a nonlinear, nonstationary radar echo signal.
A novel method that detects targets in sea clutter is proposed by completely utilizing the strengths of empirical mode decomposition (EMD) and combining it with multifractal characteristics. The EMD method is applied to decompose sea clutter signals into several intrinsic mode functions (IMFs). Multifractal detrended fluctuation analysis is utilized to calculate the generalized Hurst exponent for the main functions of IMF after which real sea clutter data are used for training and testing. Results show that targets in sea clutter can be effectively observed and detected through the proposed method, the performance of which is better than that of the target detection method for the generalized Hurst exponent under typical time, fractional Fourier transform and wavelet transform domains.) <|cite_end|> <|cite_start|> (Reference: A duct mapping method using least squares support vector machines: This paper introduces a “refractivity from clutter” (RFC) approach with an inversion method based on a pregenerated database. The RFC method exploits the information contained in the radar sea clutter return to estimate the refractive index profile. Whereas initial efforts are based on algorithms giving a good accuracy involving high computational needs, the present method is based on a learning machine algorithm in order to obtain a real‐time system. This paper shows the feasibility of a RFC technique based on the least squares support vector machine inversion method by comparing it to a genetic algorithm on simulated and noise‐free data, at 1 and 5 GHz. These data are simulated in the presence of ideal trilinear surface‐based ducts. The learning machine is based on a pregenerated database computed using Latin hypercube sampling to improve the efficiency of the learning. The results show that little accuracy is lost compared to a genetic algorithm approach. The computational time of a genetic algorithm is very high, whereas the learning machine approach is real time. The advantage of a real‐time RFC system is that it could work on several azimuths in near real time.) <|cite_end|>. One of their main advantages is that they can adaptively adjust the involved parameters and decision regions according to the collected radar returns, whereas such parameters and decision regions are usually predefined in existing popular frameworks, e.g., the constant false alarm rate (CFAR) detector <|cite_start|> (Reference: A modified CFAR algorithm based on object proposals for ship target detection in SAR images: Target detection for synthetic aperture radar (SAR) images has great influence on the successive discrimination based on the target regions. However, as a pixel-based method, the traditional constant false alarm rate (CFAR) detection could not work well for the ship target detection problem of multiple ship targets with different sizes in a SAR image, which is referred to as the multiscale situation. Moreover, it needs to use the clustering method on the pixel-level detection results to obtain the accurate target regions, which may merge two or more different targets into a target region. In this letter, a modified CFAR based on object proposals is proposed. We use the object proposal generator to generate a small set of object proposals with different sizes, and then use the proposal-based CFAR detector, where the extracted object proposals are regarded as the guard windows instead of setting fixed guard window, to detect the true positive object proposals.
By introducing the object proposals as the variable guard windows in the CFAR detector, the proposed algorithm could gain good detection performance in the multiscale situation, since the missed detection resulting from the big differences between the sizes of the fixed guard window and ship targets can be avoided. Meanwhile, the proposed method can directly obtain the accurate target regions. The effectiveness of the proposed algorithm is verified using the measured SAR data.) <|cite_end|>. In this way, learning-based detectors may be less sensitive to variations in the detection environment. In view of these observations, this letter is devoted to exploring discriminative features for feature-space construction and to designing a learning-based detector for accurate small-target detection. The main contributions of this work are as follows: \begin{itemize} \item We exploit concepts from other research fields to define three features, i.e., the temporal information entropy (TIE), the temporal Hurst exponent (THE), and the frequency peak-to-average ratio (FPAR), from the perspectives of the time and frequency domains. Notably, the three defined features are quite simple yet practically discriminative under varying detection environments, even in low SCR and low false alarm rate (FAR) cases; a sketch of plausible instantiations of these features is given after this list. \item We adopt and modify the support vector machine (SVM), a classical binary classifier, to design a learning-based detector. In contrast to existing learning-based detectors, the proposed detector incorporates the FAR and can flexibly control it by simply tuning two introduced parameters (see the sketch following this list). This makes it convenient to fairly compare different detection algorithms and to flexibly regulate the sensitivity of the detector to outliers incurred by factors such as sea spikes, thereby meeting the requirements of different applications. \item Experimental results show that, compared with several classical detectors, the proposed detector significantly improves the detection probability in both low SCR (by up to $58\%$) and low FAR (by up to $40\%$) cases. \end{itemize}
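To make the first contribution concrete, the following is a minimal sketch of how the three features can be instantiated, assuming standard definitions of information entropy, the DFA-based Hurst exponent, and the Doppler peak-to-average ratio; the exact definitions adopted in this letter may differ in normalization and scale selection. Let $x_1,\dots,x_N$ denote the complex returns of the cell under test, let the amplitudes $|x_n|$ be quantized into $M$ bins with empirical probabilities $p_1,\dots,p_M$, and let $X(k)$ be the discrete Fourier transform (Doppler spectrum) of the returns. Then
\begin{equation}
\mathrm{TIE} = -\sum_{m=1}^{M} p_m \log p_m, \qquad \mathrm{FPAR} = \frac{\max_{k} |X(k)|}{\frac{1}{N}\sum_{k=1}^{N} |X(k)|},
\end{equation}
and THE is obtained as the slope of $\log F(n)$ versus $\log n$, where $F(n)$ is the detrended fluctuation of the cumulative amplitude profile at scale $n$, i.e., $F(n) \propto n^{\mathrm{THE}}$. Intuitively, one would expect a target-bearing cell to produce more ordered amplitudes (lower TIE), long-range correlation shifted away from the clutter-only value of THE, and a sharper Doppler line (higher FPAR).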
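For the second contribution, one plausible way to incorporate the FAR into an SVM, given only the description above, is through class-specific penalty parameters; this cost-sensitive construction is an illustrative assumption rather than necessarily the exact mechanism of this letter:
\begin{equation}
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{w}\|^{2} + C_{+}\sum_{i:\,y_i=+1}\xi_i + C_{-}\sum_{i:\,y_i=-1}\xi_i \quad \text{s.t.} \quad y_i\bigl(\mathbf{w}^{\top}\phi(\mathbf{z}_i)+b\bigr) \ge 1-\xi_i,\; \xi_i \ge 0,
\end{equation}
where $\mathbf{z}_i$ is the 3-D feature vector of the $i$-th training cell, $y_i=+1$ for target cells and $y_i=-1$ for clutter-only cells, and $(C_{+}, C_{-})$ play the role of the two tunable parameters: increasing $C_{-}$ penalizes misclassified clutter more heavily and thus drives the empirical FAR down, at some cost in detection probability. A minimal end-to-end sketch in Python is given below; the variable names and the use of scikit-learn's class-weighted \texttt{SVC} as a stand-in for the modified SVM are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def temporal_information_entropy(x, n_bins=32):
    # Shannon entropy of the amplitude histogram of one dwell.
    hist, _ = np.histogram(np.abs(x), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def temporal_hurst_exponent(x, scales=(8, 16, 32, 64)):
    # Hurst exponent via first-order detrended fluctuation
    # analysis (DFA-1) of the amplitude profile.
    amp = np.abs(x)
    y = np.cumsum(amp - amp.mean())
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        res = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
               for s in segs]
        fluct.append(np.sqrt(np.mean(res)))
    # THE = slope of log F(n) versus log n.
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

def frequency_peak_to_average_ratio(x):
    # Peak-to-average ratio of the Doppler amplitude spectrum.
    spec = np.abs(np.fft.fft(x))
    return spec.max() / spec.mean()

def extract_features(x):
    return np.array([temporal_information_entropy(x),
                     temporal_hurst_exponent(x),
                     frequency_peak_to_average_ratio(x)])

# Hypothetical training data: rows of 3-D feature vectors with
# labels +1 (target cell) and -1 (clutter-only cell). Raising the
# clutter-class weight penalizes false alarms and lowers the FAR.
# X_train, y_train = ..., ...
# detector = SVC(kernel="rbf", class_weight={-1: 50.0, 1: 1.0})
# detector.fit(X_train, y_train)
# decision = detector.predict(extract_features(cell).reshape(1, -1))
\end{verbatim}
<|paper_end|>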
[ "<|reference_start|> Marine Wireless Big Data: Efficient Transmission, Related Applications, and Challenges: The vast volume of marine wireless sampling data and its continuously explosive growth herald the coming of the era of marine wireless big data. Two challenges imposed by these data are how to fast, reliably, and sustainably deliver them in extremely hostile marine environments and how to apply them after collection. In this article, we first propose an architecture of heterogeneous marine networks that flexibly exploits the existing underwater wireless techniques as a potential solution for fast data transmission. We then investigate the possibilities of and develop the schemes for energy-efficient and reliable undersea transmission without or slightly with data rate reduction. After discussing the data transmission, we summarize the possible applications of the collected big data and particularly focus on the problems of applying these data in sea-surface object detection and marine object recognition. Open issues and challenges that need to be further explored regarding transmission and detection/recognition are also discussed in the article. <|reference_end|>", "<|reference_start|> Low observable targets detection by joint fractal properties of sea clutter: An experimental study of IPIX OHGR datasets: We exploit the joint fractal properties of sea clutter extracted from detrended fluctuation analysis (DFA) for targets detection. We find that two specific fractal statistics, i.e., the intercept at the crucial scale and the Hurst exponent of optimal scales provide valuable information for targets detection. The first statistic measures the discrepancy between sea clutter and low observable targets at the crucial fractal scale, and the second one evaluates the average fractal difference within the optimal multi-scales. A target detection method integrating these two statistics is proposed, which is validated by real-life IPIX radar datasets. We find that this joint fractal detection approach achieves more accurate results for low observable targets detection. <|reference_end|>", "<|reference_start|> Tri-feature-based detection of floating small targets in sea clutter: It is always a challenging problem for marine surface surveillance radar to detect sea-surface floating small targets. Conventional detectors using incoherent integration and adaptive clutter suppression have low detection probabilities for such targets with weak returns and unobservable Doppler shifts. In this paper, three features of a received vector at a resolution cell-the relative amplitude, relative Doppler peak height, and relative entropy of the Doppler amplitude spectrum-are exploited to give returns with targets from sea clutter. Real datasets show that each feature alone has some discriminability, and the three features jointly exhibit strong discriminability. Due to diversity of targets in practice, it is impossible to get features of returns with all kinds of targets. We recast detection of sea-surface floating small targets as a one-class anomaly detection problem in the 3D feature space. A fast convexhull learning algorithm is proposed to learn the decision region of the clutter pattern from feature vectors of clutter-only observations. As a result, a tri-feature-based detector is developed. The experiment results for the IPIX datasets show that the proposed detector at an observation time of several seconds attains better detection performance than several existing detectors. 
<|reference_end|>", "<|reference_start|> A modified CFAR algorithm based on object proposals for ship target detection in SAR images: Target detection for synthetic aperture radar (SAR) images has great influence on the successive discrimination based on the target regions. However, as a pixel-based method, the traditional constant false alarm rate (CFAR) detection could not work well for the ship target detection problem of multiple ship targets with different sizes in a SAR image, which is referred to as the multiscale situation. Moreover, it needs to use the clustering method on the pixel-level detection results to obtain the accurate target regions, which may merge two or more different targets into a target region. In this letter, a modified CFAR based on object proposals is proposed. We use the object proposal generator to generate a small set of object proposals with different sizes, and then use the proposal-based CFAR detector, where the extracted object proposals are regarded as the guard windows instead of setting fixed guard window, to detect the true positive object proposals. By introducing the object proposals as the variable guard windows in the CFAR detector, the proposed algorithm could gain good detection performance in the multiscale situation, since the missed detection resulting from the big differences between the sizes of the fixed guard window and ship targets can be avoided. Meanwhile, the proposed method can directly obtain the accurate target regions. The effectiveness of the proposed algorithm is verified using the measured SAR data. <|reference_end|>" ]
[ 0, 5, 6, 11 ]
{"<|cite_1|>": "arxiv-136996", "<|cite_2|>": "arxiv-136996", "<|cite_3|>": "ss-1033129", "<|cite_4|>": "ss-1033130", "<|cite_5|>": "ss-2163619", "<|cite_6|>": "ss-1033130", "<|cite_7|>": "ss-2163619", "<|cite_8|>": "ss-1636682", "<|multi_cite_9_1|>": "ss-1033131", "<|multi_cite_9_2|>": "ss-1033132", "<|multi_cite_9_3|>": "ss-1033133", "<|cite_10|>": "ss-1293774"}